Date:	Tue, 8 Dec 2015 06:49:41 -0500
From:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:	Roman Pen <r.peniaev@...il.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/3] debugfs: implement
 'debugfs_create_dir_with_tmpfiles()'

On Tue, Dec 08, 2015 at 10:51:03AM +0100, Roman Pen wrote:
> Hello.
> 
> Here is an attempt to solve an annoying race that exists between two operations
> on debugfs entries: a write (setting a request) and a read (reading the response).
> 
> E.g. let's assume that we have some storage device which can have thousands
> of snapshots (yeah, plenty of them, so it is ridiculous to create a debugfs
> entry for each), and each snapshot is addressed by a handle, which is a UUID
> or any other non-numeric character sequence (for a numeric sequence this
> problem can be solved by a 'seek' operation).  This device provides a debugfs
> entry 'snap_status', which can be opened for reading and writing, where a
> write is the operation for specifying a request and a read is the operation
> for getting the response back.
> 
> I.e. it is obvious that to request the status of a snapshot you have to write
> the snapshot's UUID first and then read the status response back, so the
> sequence can be the following:
> 
>   # echo $UUID > /sys/kernel/debug/storage/snap_status
>   # cat /sys/kernel/debug/storage/snap_status
> 
> Between those two operations a race exists: if someone else comes along and
> requests the status of another snapshot, the first requester will get
> incorrect data.
> 
> An atomic request-set and response-read solution can be the following:
> 
>   # cat /sys/kernel/debug/storage/snap_status/$UUID
> 
> Here debugfs creates a non-existent temporary entry on demand with the $UUID
> name and eventually calls the file operations which were passed to the
> 'debugfs_create_dir_with_tmpfiles()' function.  The caller of that function
> can check the correctness of the file name in the 'i_fop->open' callback and
> can return an error if the temporary file name does not match some format.
> 
> The temporary file that is created will not appear in any lookups, further
> linking is forbidden, and the corresponding dentry and inode will be freed
> when the last file descriptor is closed (see O_TMPFILE; the only difference
> is that a debugfs temporary dentry has a name).
> 
> Of course this creation of files on demand can be applied to many other
> cases where it is impossible to create as many debugfs entries as there are
> objects, but atomicity of the write-read pair is required.
> 
> This atomicity can also be achieved by locking from userspace, but that
> approach increases complexity and makes it hard to get by with just a few
> commands on the command line, like 'echo' and 'cat'.
> 
> So basically, creating a temporary file on demand with a specified name is a
> way to provide one additional parameter to a 'read' operation.
> 
> Probably there is a more elegant solution to this write-read race problem,
> but I have not found one.
> 
> PS. I did not want to use configfs, because I have nothing to configure (what
>     I have described is not a configuration issue), and I do not like keeping
>     dentries in the system if userspace forgets to remove them.

Do you have a patch series that depends on these new apis?  I don't want
to add things to debugfs without any in-tree users if at all possible.

thanks,

greg k-h
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/