Date:	Tue, 27 May 2014 12:25:22 -0500
From:	Jeff Smith <jsmith.lkml@...il.com>
To:	linux-kernel@...r.kernel.org
Subject: inotify_rm_watch() user-space safety requirements?

inotify's behavior concerning events from removed watches (they do
happen) and watch-descriptor reuse (which I know little about) is
currently undocumented.

Although it mimics a standard multiplexing interface in most regards,
writing a robust user-space handler is comparatively complex due to
the atypical delivery of "stale" wd events preceding an IN_IGNORED
event, and the lack of guarantees about how quickly a wd can be
reused by inotify_add_watch(). Not being familiar with the
inotify/fsnotify internals, it's not obvious to me how the
fsnotify_group management is done. So far I've maintained queues of
"dead" wd wrappers (or at least a counter) to filter stale events,
but I have no idea whether this is overkill.
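
For concreteness, here's a minimal sketch of the filtering I've been
doing (illustrative only: the names are mine, and it assumes every
inotify_rm_watch() is eventually followed by an IN_IGNORED for that
wd):

/* Track wds removed via inotify_rm_watch() and drop their queued
 * events until the kernel delivers IN_IGNORED for them. */
#include <sys/inotify.h>
#include <unistd.h>
#include <stdio.h>

#define MAX_DEAD 64
static int dead_wds[MAX_DEAD];
static int n_dead;

/* Call immediately after inotify_rm_watch(fd, wd). */
static void mark_dead(int wd)
{
	if (n_dead < MAX_DEAD)
		dead_wds[n_dead++] = wd;
}

static int is_dead(int wd)
{
	for (int i = 0; i < n_dead; i++)
		if (dead_wds[i] == wd)
			return 1;
	return 0;
}

static void unmark_dead(int wd)
{
	for (int i = 0; i < n_dead; i++)
		if (dead_wds[i] == wd) {
			dead_wds[i] = dead_wds[--n_dead];
			return;
		}
}

static void drain(int fd)
{
	char buf[4096]
		__attribute__((aligned(__alignof__(struct inotify_event))));
	ssize_t len = read(fd, buf, sizeof(buf));

	for (char *p = buf; len > 0 && p < buf + len;
	     p += sizeof(struct inotify_event) +
		  ((struct inotify_event *)p)->len) {
		struct inotify_event *ev = (struct inotify_event *)p;

		if (ev->mask & IN_IGNORED) {
			unmark_dead(ev->wd); /* wd now reusable... or is it? */
			continue;
		}
		if (is_dead(ev->wd))
			continue; /* stale event from a removed watch */
		printf("wd %d mask %#x\n", ev->wd, ev->mask); /* live event */
	}
}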

If removed descriptors are reserved until the IN_IGNORED event is
drained from the read queue, could that be formally guaranteed? If it
isn't, is that functionality that could reasonably be added, short of
some other form of new (optional?) queue-filter-on-rm functionality?
In my experience, the asynchronous handling of watch removals is a
cost that seldom serves much user benefit.
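
To spell out the reuse race I have in mind (a hypothetical sequence;
whether the kernel actually permits it is precisely my question, and
the /tmp paths are just placeholders):

#include <sys/inotify.h>
#include <stdio.h>

int main(void)
{
	int fd = inotify_init();
	int wd = inotify_add_watch(fd, "/tmp/a", IN_CLOSE_WRITE);

	/* Events for /tmp/a may already be queued at this point. */
	inotify_rm_watch(fd, wd);
	int wd2 = inotify_add_watch(fd, "/tmp/b", IN_CLOSE_WRITE);

	/*
	 * If wd2 == wd can happen before the old watch's IN_IGNORED
	 * has been read, queued events for /tmp/a are no longer
	 * distinguishable from new events for /tmp/b.
	 */
	printf("old wd %d, new wd %d\n", wd, wd2);
	return 0;
}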

Regards,
Jeff
