Message-ID: <53ECA81D.7040805@parallels.com>
Date:	Thu, 14 Aug 2014 16:14:21 +0400
From:	Maxim Patlasov <mpatlasov@...allels.com>
To:	Miklos Szeredi <miklos@...redi.hu>
CC:	fuse-devel <fuse-devel@...ts.sourceforge.net>,
	Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/5] fuse: close file synchronously (v2)

On 08/13/2014 04:44 PM, Miklos Szeredi wrote:
> On Fri, Jun 6, 2014 at 3:27 PM, Maxim Patlasov <MPatlasov@...allels.com> wrote:
>> Hi,
>>
>> There is a long-standing demand for synchronous behaviour of fuse_release:
>>
>> http://sourceforge.net/mailarchive/message.php?msg_id=19343889
>> http://sourceforge.net/mailarchive/message.php?msg_id=29814693
>>
>> A year ago Avati and I explained why such a feature would be useful:
>>
>> http://sourceforge.net/mailarchive/message.php?msg_id=29889055
>> http://sourceforge.net/mailarchive/message.php?msg_id=29867423
>>
>> In short, the problem is that fuse_release (which is called on the last
>> user close(2)) sends FUSE_RELEASE to userspace and returns without waiting
>> for an ACK from userspace. Consequently, there is a gap during which the
>> user regards the file as released while the userspace fuse daemon is still
>> working on it. An attempt to access the file from another node leads to
>> complicated synchronization problems because the first node still "holds"
>> the file.
> Tying RELEASE to close(2) is not going to work.  Look at all the
> places that call fput() (or fdput() in recent kernels); those are all
> potential triggers for RELEASE, some realistic, some not quite, but
> all are certainly places where a synchronous release could block
> *instead* of close.
>
> Which just means that close will still be asynchronous with release
> some of the time.  So it's not clear to me what is to be gained from
> this patchset.

The patch-set doesn't tie RELEASE to close(2); it ensures that we report 
to user space exactly the last fput(). That's correct because the last 
fput() is exactly the moment when any file system whose sharing mode is 
tied to open/close must drop the sharing mode. This is the case even for 
some local filesystems, for example ntfs-3g.
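
To illustrate why the last fput() is the right reporting point, consider a 
trivial userspace example (the mount path below is just a placeholder):

/*
 * After dup(2) there are two descriptors referring to the same struct file,
 * so the first close(2) is not the last fput() and no FUSE_RELEASE is sent;
 * only the second close() drops the last reference and triggers RELEASE.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/fuse/testfile", O_RDWR);	/* placeholder path */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	int fd2 = dup(fd);	/* second reference to the same open file */

	close(fd);		/* not the last fput(): no RELEASE yet */
	puts("first close done, file is still open for the fuse daemon");

	close(fd2);		/* last reference dropped: RELEASE is sent */
	puts("last close done, RELEASE triggered");
	return 0;
}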

Could you please look closely at your commit 
5a18ec176c934ca1bc9dc61580a5e0e90a9b5733? It actually implemented two 
different things: 1) synchronous release and 2) delayed path_put(). The 
latter was well explained by the comment:

 >        /*
 >         * If this is a fuseblk mount, then it's possible that
 >         * releasing the path will result in releasing the
 >         * super block and sending the DESTROY request.  If
 >         * the server is single threaded, this would hang.
 >         * For this reason do the path_put() in a separate
 >         * thread.
 >         */
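
Schematically, the deferral that comment describes is the usual workqueue 
pattern, roughly like this (a simplified sketch with illustrative names, not 
the literal code from the commit):

#include <linux/workqueue.h>
#include <linux/path.h>
#include <linux/slab.h>

struct release_work {
	struct work_struct work;
	struct path path;
};

static void release_path_fn(struct work_struct *w)
{
	struct release_work *rw = container_of(w, struct release_work, work);

	/* May drop the last reference to the super block and thus send
	 * DESTROY; doing it here keeps a single-threaded server alive. */
	path_put(&rw->path);
	kfree(rw);
}

static void release_path_async(struct path *path)
{
	struct release_work *rw = kmalloc(sizeof(*rw), GFP_KERNEL);

	if (!rw) {
		path_put(path);		/* fall back to the current context */
		return;
	}
	rw->path = *path;
	INIT_WORK(&rw->work, release_path_fn);
	schedule_work(&rw->work);
}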

So it's clear why the delay is needed and why it's bound to the fuseblk 
condition. But making the release synchronous was bound to the same 
condition, which is obviously wrong. I understand why you made that 
decision in 2011: otherwise, we could block in the wrong context (the last 
decrement of ff->count might happen in the scope of read-ahead or mmap-ed 
writeback). But now, with the approach implemented in this patch-set, that 
is impossible: we wait for completion of all async operations before 
triggering the synchronous release. Thus the patch-set unties functionality 
which already existed (synchronous release) from the wrong condition 
(fuseblk mount) and puts it under well-defined control (FUSE_CLOSE_WAIT).
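
Schematically, the intended flow with FUSE_CLOSE_WAIT negotiated looks like 
this (a rough sketch; the fc->close_wait flag and the fuse_file_wait_async() 
helper are shorthand for the idea, not necessarily the exact identifiers in 
the patches):

static void fuse_do_release(struct fuse_conn *fc, struct fuse_file *ff)
{
	struct fuse_req *req = ff->reserved_req;

	if (fc->close_wait) {
		/* 1. Wait until read-ahead and mmap-ed writeback have
		 *    dropped their references to ff, so we cannot end up
		 *    blocking in the wrong context. */
		fuse_file_wait_async(ff);

		/* 2. Send FUSE_RELEASE and wait for the userspace ACK;
		 *    when this returns, the daemon has dropped the
		 *    sharing mode. */
		fuse_request_send(fc, req);
	} else {
		/* Legacy behaviour: fire-and-forget in the background. */
		fuse_request_send_background(fc, req);
	}
}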

Thanks,
Maxim
