Date: Fri, 28 Jun 2024 11:17:43 +0800
From: Ian Kent <ikent@...hat.com>
To: Christian Brauner <brauner@...nel.org>, Jan Kara <jack@...e.cz>
Cc: Matthew Wilcox <willy@...radead.org>,
 Lucas Karpinski <lkarpins@...hat.com>, viro@...iv.linux.org.uk,
 raven@...maw.net, linux-fsdevel@...r.kernel.org,
 linux-kernel@...r.kernel.org, Alexander Larsson <alexl@...hat.com>,
 Eric Chanudet <echanude@...hat.com>
Subject: Re: [RFC v3 1/1] fs/namespace: remove RCU sync for MNT_DETACH umount


On 27/6/24 23:16, Christian Brauner wrote:
> On Thu, Jun 27, 2024 at 01:54:18PM GMT, Jan Kara wrote:
>> On Thu 27-06-24 09:11:14, Ian Kent wrote:
>>> On 27/6/24 04:47, Matthew Wilcox wrote:
>>>> On Wed, Jun 26, 2024 at 04:07:49PM -0400, Lucas Karpinski wrote:
>>>>> +++ b/fs/namespace.c
>>>>> @@ -78,6 +78,7 @@ static struct kmem_cache *mnt_cache __ro_after_init;
>>>>>    static DECLARE_RWSEM(namespace_sem);
>>>>>    static HLIST_HEAD(unmounted);	/* protected by namespace_sem */
>>>>>    static LIST_HEAD(ex_mountpoints); /* protected by namespace_sem */
>>>>> +static bool lazy_unlock = false; /* protected by namespace_sem */
>>>> That's a pretty ugly way of doing it.  How about this?
>>> Ha!
>>>
>>> That was my original thought but I also didn't much like changing all the
>>> callers.
>>>
>>> I don't really like the proliferation of these small helper functions
>>> either, but if everyone is happy to do this I think it's a great idea.
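Matthew's actual diff isn't quoted above, so purely as a hypothetical
sketch (invented names) of the helper-style alternative being discussed:
the lazy decision is passed down as an argument instead of living in a
namespace_sem-protected global, and existing namespace_unlock() callers
don't need to change:

	/* Hypothetical sketch only - not Matthew's actual patch. */
	static void __namespace_unlock(bool lazy)
	{
		/* ... body of today's namespace_unlock() ... */

		/* The change under discussion: skip the expedited RCU
		 * grace period for lazy (MNT_DETACH) unmounts. */
		if (!lazy)
			synchronize_rcu_expedited();

		/* ... final mntput() of each unmounted mount ... */
	}

	static inline void namespace_unlock(void)
	{
		__namespace_unlock(false);
	}

	static inline void namespace_unlock_lazy(void)
	{
		__namespace_unlock(true);
	}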
>> So I know you've suggested removing the synchronize_rcu_expedited() call
>> in your comment on v2. But I wonder why it is safe? I *thought*
>> synchronize_rcu_expedited() is there to synchronize the dropping of the
>> last mnt reference (and maybe something else) - see the comment at the
>> beginning of mntput_no_expire() - and this change would break that?
> Yes. During umount mnt->mnt_ns will be set to NULL with namespace_sem
> and the mount seqlock held. mntput() doesn't acquire namespace_sem as
> that would get rather problematic during path lookup. It also elides
> lock_mount_hash() by looking at mnt->mnt_ns because that's set to NULL
> when a mount is actually unmounted.
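The elision being described here is the fast path of mntput_no_expire()
in fs/namespace.c; condensed (not verbatim):

	static void mntput_no_expire(struct mount *mnt)
	{
		rcu_read_lock();
		if (likely(READ_ONCE(mnt->mnt_ns))) {
			/* Still attached: drop the reference under RCU
			 * alone - no namespace_sem, no lock_mount_hash(). */
			mnt_add_count(mnt, -1);
			rcu_read_unlock();
			return;
		}
		/* mnt_ns == NULL: an unmount is in flight, so take the
		 * slow path under lock_mount_hash(), still inside the
		 * RCU read-side section. */
		lock_mount_hash();
		/* ... if other references remain, drop ours and return;
		 * otherwise tear the mount down ... */
	}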
>
> So iirc synchronize_rcu_expedited() will ensure that it is actually the
> system call that shuts down all the mounts it put on the unmounted list,
> and not some other task that also called mntput(), as that would cause
> pretty blatant EBUSY issues.
>
> So callers that come in before mnt->mnt_ns is set to NULL simply return,
> of course, but callers that come in after mnt->mnt_ns = NULL will
> acquire lock_mount_hash() _under_ rcu_read_lock(). These callers see an
> elevated reference count and thus simply return, while
> namespace_unlock()'s synchronize_rcu_expedited() prevents the system
> call from making progress.
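That ordering comes from namespace_unlock(); condensed from
fs/namespace.c (ex_mountpoints handling and other details elided):

	static void namespace_unlock(void)
	{
		struct hlist_head head;
		struct hlist_node *p;
		struct mount *m;

		/* Steal the list of mounts this umount collected... */
		hlist_move_list(&unmounted, &head);
		up_write(&namespace_sem);

		if (likely(hlist_empty(&head)))
			return;

		/* ...and wait out every mntput() still running inside
		 * rcu_read_lock(), so the final reference drop below is
		 * done by this system call, not by a concurrent task. */
		synchronize_rcu_expedited();

		hlist_for_each_entry_safe(m, p, &head, mnt_umount) {
			hlist_del(&m->mnt_umount);
			mntput(&m->mnt);
		}
	}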
>
> But I also don't see it working without risk even with MNT_DETACH. It
> still has potential to cause issues in userspace. Any program that
> always passes MNT_DETACH simply to ensure that a mount gets unmounted
> even in the very rare case that it is still busy might now end up
> seeing increased EBUSY failures for mounts that didn't actually need to
> be unmounted with MNT_DETACH. In other words, this is only innocuous if
> userspace only uses MNT_DETACH for mounts it actually knows are busy
> when trying to unmount. And I don't think that's the case.
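Concretely, the userspace pattern in question is something like this
minimal sketch (mount point from argv, illustration only):

	#include <errno.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mount.h>

	/* Always detach-unmount, so that a busy mount is lazily
	 * detached instead of the call failing with EBUSY. */
	int main(int argc, char **argv)
	{
		if (argc != 2) {
			fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
			return 1;
		}
		if (umount2(argv[1], MNT_DETACH) == -1) {
			fprintf(stderr, "umount2(%s): %s\n",
				argv[1], strerror(errno));
			return 1;
		}
		return 0;
	}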
>
I'm sorry, but how does an MNT_DETACH umount system call return EBUSY? I
can't see how that can happen.

I have used lazy umount a lot over the years and I haven't had problems
with it. There is a tendency to think there might be problems using it,
but I've never been able to spot them.


Ian

