Date: Fri, 28 Jun 2024 17:13:10 +0200
From: Alexander Larsson <alexl@...hat.com>
To: Christian Brauner <brauner@...nel.org>
Cc: Ian Kent <ikent@...hat.com>, Jan Kara <jack@...e.cz>, Matthew Wilcox <willy@...radead.org>, 
	Lucas Karpinski <lkarpins@...hat.com>, viro@...iv.linux.org.uk, raven@...maw.net, 
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org, 
	Eric Chanudet <echanude@...hat.com>
Subject: Re: [RFC v3 1/1] fs/namespace: remove RCU sync for MNT_DETACH umount

On Fri, Jun 28, 2024 at 2:54 PM Christian Brauner <brauner@...nel.org> wrote:
>
> On Fri, Jun 28, 2024 at 11:17:43AM GMT, Ian Kent wrote:
> >
> > On 27/6/24 23:16, Christian Brauner wrote:
> > > On Thu, Jun 27, 2024 at 01:54:18PM GMT, Jan Kara wrote:
> > > > On Thu 27-06-24 09:11:14, Ian Kent wrote:
> > > > > On 27/6/24 04:47, Matthew Wilcox wrote:
> > > > > > On Wed, Jun 26, 2024 at 04:07:49PM -0400, Lucas Karpinski wrote:
> > > > > > > +++ b/fs/namespace.c
> > > > > > > @@ -78,6 +78,7 @@ static struct kmem_cache *mnt_cache __ro_after_init;
> > > > > > >    static DECLARE_RWSEM(namespace_sem);
> > > > > > >    static HLIST_HEAD(unmounted);    /* protected by namespace_sem */
> > > > > > >    static LIST_HEAD(ex_mountpoints); /* protected by namespace_sem */
> > > > > > > +static bool lazy_unlock = false; /* protected by namespace_sem */
> > > > > > That's a pretty ugly way of doing it.  How about this?
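
(Matthew's patch is trimmed from the quote above, but going by the helper
discussion below it presumably had roughly this shape; a sketch only, with
illustrative names, passing the flag down instead of keeping it in a
file-scope variable:)

static void __namespace_unlock(bool lazy)
{
	struct hlist_head head;

	hlist_move_list(&unmounted, &head);
	up_write(&namespace_sem);

	if (likely(hlist_empty(&head)))
		return;

	/* Only the non-lazy path pays for the grace period. */
	if (!lazy)
		synchronize_rcu_expedited();

	/* ... mntput() the detached mounts as before ... */
}

static inline void namespace_unlock(void)
{
	__namespace_unlock(false);
}

static inline void namespace_unlock_lazy(void)
{
	__namespace_unlock(true);
}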
> > > > > Ha!
> > > > >
> > > > > That was my original thought but I also didn't much like changing
> > > > > all the callers.
> > > > >
> > > > > I don't really like the proliferation of these small helper functions
> > > > > either, but if everyone is happy to do this I think it's a great idea.
> > > > So I know you've suggested removing the synchronize_rcu_expedited()
> > > > call in your comment to v2. But I wonder why it is safe? I *thought*
> > > > synchronize_rcu_expedited() is there to synchronize the dropping of the
> > > > last mnt reference (and maybe something else) - see the comment at the
> > > > beginning of mntput_no_expire() - and this change would break that?
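
(For context, the fast path Jan is pointing at looks roughly like this;
paraphrased from fs/namespace.c, details differ between kernel versions:)

static void mntput_no_expire(struct mount *mnt)
{
	rcu_read_lock();
	if (likely(READ_ONCE(mnt->mnt_ns))) {
		/*
		 * No lock_mount_hash() here, so ->mnt_ns can change under
		 * us. But if it is non-NULL, umount has not finished yet:
		 * it still holds a reference that is only dropped after the
		 * RCU delay that follows setting ->mnt_ns to NULL, so the
		 * reference we drop here cannot be the last one.
		 */
		mnt_add_count(mnt, -1);
		rcu_read_unlock();
		return;
	}
	lock_mount_hash();
	/* ... slow path: this may be the final mntput() ... */
}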
> > > Yes. During umount mnt->mnt_ns will be set to NULL with namespace_sem
> > > and the mount seqlock held. mntput() doesn't acquire namespace_sem as
> > > that would get rather problematic during path lookup. It also elides
> > > lock_mount_hash() by looking at mnt->mnt_ns because that's set to NULL
> > > when a mount is actually unmounted.
> > >
> > > So iirc synchronize_rcu_expedited() will ensure that it is actually the
> > > system call that shuts down all the mounts it put on the unmounted list,
> > > and not some other task that also called mntput(), as that would cause
> > > pretty blatant EBUSY issues.
> > >
> > > So callers that come before mnt->mnt_ns = NULL simply return, of course,
> > > but callers that come after mnt->mnt_ns = NULL will acquire
> > > lock_mount_hash() _under_ rcu_read_lock(). These callers see an elevated
> > > reference count and thus simply return, while namespace_unlock()'s
> > > synchronize_rcu_expedited() prevents the system call from making
> > > progress.
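
(Schematically, the ordering being described; a sketch, not exact code:)

   umount2() task                        rcu-walk mntput() task
   --------------                        ----------------------
   take namespace_sem for write
   mnt->mnt_ns = NULL
   up_write(&namespace_sem)
                                         rcu_read_lock()
                                         sees mnt->mnt_ns == NULL
                                         lock_mount_hash()
                                         refcount still elevated -> return
                                         rcu_read_unlock()
   synchronize_rcu_expedited()
   (waits for the reader above)
   final mntput()/cleanup happens here,
   in the umount2() task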
> > >
> > > But I also don't see it working without risk even with MNT_DETACH. It
> > > still has the potential to cause issues in userspace. Any program that
> > > always passes MNT_DETACH simply to ensure that a mount is unmounted even
> > > in the very rare case that it might still be busy could now end up
> > > seeing increased EBUSY failures for mounts that didn't actually need to
> > > be unmounted with MNT_DETACH. In other words, this is only innocuous if
> > > userspace only uses MNT_DETACH for mounts it actually knows are busy
> > > when it's trying to unmount. And I don't think that's the case.
> > >
> > I'm sorry, but how does an MNT_DETACH umount system call return EBUSY? I
> > can't see how that can happen.
>
> It's not the umount() call that is the problem. Say you have the following
> sequence:
>
> (1) mount(ext4-device, /mnt)
>     umount(/mnt, 0)
>     mount(ext4-device, /mnt)
>
> If that ext4 filesystem isn't in use anymore then umount() will succeed.
> The same task can immediately issue a second mount() call on the same
> device and it must succeed.
>
> Today the behavior for this is the same whether or not the caller uses
> MNT_DETACH. So:
>
> (2) mount(ext4-device, /mnt)
>     umount(/mnt, MNT_DETACH)
>     mount(ext4-device, /mnt)
>
> All that MNT_DETACH does is skip the check for busy mounts; otherwise
> it's identical to a regular umount. So (1) and (2) will behave the same
> as long as the filesystem isn't used anymore.
>
> But afaict with your changes this wouldn't be true anymore. If someone
> uses (2) on a filesystem that isn't busy then they might end up getting
> EBUSY on the second mount. And if I'm right then that's potentially a
> rather visible change.
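
(As a concrete reproducer sketch for (2); /dev/sdX and /mnt are
placeholders, and it needs root and an otherwise idle ext4 filesystem:)

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	if (mount("/dev/sdX", "/mnt", "ext4", 0, NULL))
		perror("mount 1");
	if (umount2("/mnt", MNT_DETACH))
		perror("umount2");
	/* Today this second mount succeeds if the fs is idle; with the
	 * proposed change it could racily fail with EBUSY. */
	if (mount("/dev/sdX", "/mnt", "ext4", 0, NULL))
		perror("mount 2");
	return 0;
}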

This is rather unfortunate, as the synchronize_rcu call is quite
expensive, in particular on a real-time kernel where there are no
expedited RCU grace periods. This is causing container startup to be
slow, as several umount(MNT_DETACH) calls happen during container setup
(after the pivot_root, etc.).

Maybe we can add a umount flag for users that don't need the current
behaviour wrt EBUSY? In the container use case the important part is
that the old mounts are disconnected from the child namespace, not
what the mount's busy state is (typically the filesystem is still
mounted in the parent namespace anyway).
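
Something like this (the flag name and value are entirely made up, this
is just to sketch the idea):

#include <sys/mount.h>

/* Hypothetical: detach from the namespace but skip the RCU grace
 * period, accepting that a racing mntput() may do the final cleanup
 * and that a following mount of the same device may see EBUSY. */
#define UMOUNT_NOSYNC	0x00000400

static void cleanup_old_root(void)
{
	umount2("/old-root", MNT_DETACH | UMOUNT_NOSYNC);
}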

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 Alexander Larsson                                Red Hat, Inc
       alexl@...hat.com         alexander.larsson@...il.com

