Message-ID: <YhAKV5cjXEz2JTBB@zeniv-ca.linux.org.uk>
Date: Fri, 18 Feb 2022 21:06:31 +0000
From: Al Viro <viro@...iv.linux.org.uk>
To: Rik van Riel <riel@...riel.com>
Cc: linux-kernel@...r.kernel.org, kernel-team@...com,
linux-fsdevel@...r.kernel.org, paulmck@...nel.org,
gscrivan@...hat.com, Eric Biederman <ebiederm@...ssion.com>,
Chris Mason <clm@...com>
Subject: Re: [PATCH 1/2] vfs: free vfsmount through rcu work from kern_unmount
On Fri, Feb 18, 2022 at 08:24:09PM +0000, Al Viro wrote:
> On Fri, Feb 18, 2022 at 07:43:43PM +0000, Al Viro wrote:
> > On Fri, Feb 18, 2022 at 02:33:31PM -0500, Rik van Riel wrote:
> > > On Fri, 2022-02-18 at 19:26 +0000, Al Viro wrote:
> > > > On Fri, Feb 18, 2022 at 01:31:13PM -0500, Rik van Riel wrote:
> > > > > After kern_unmount returns, callers can no longer access the
> > > > > vfsmount structure. However, the vfsmount structure does need
> > > > > to be kept around until the end of the RCU grace period, to
> > > > > make sure other accesses have all gone away too.
> > > > >
> > > > > This can be accomplished by either gating each kern_unmount
> > > > > on synchronize_rcu (the comment in the code says it all), or
> > > > > by deferring the freeing until the next grace period, where
> > > > > it needs to be handled in a workqueue due to the locking in
> > > > > mntput_no_expire().
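> > > > >
> > > > > For reference, the synchronous gating referred to above is the current
> > > > > kern_unmount() in fs/namespace.c, which (paraphrased from memory, so
> > > > > details may differ) looks roughly like:
> > > > >
> > > > > void kern_unmount(struct vfsmount *mnt)
> > > > > {
> > > > > 	/* release long term mount so mount point can be released */
> > > > > 	if (!IS_ERR(mnt)) {
> > > > > 		real_mount(mnt)->mnt_ns = NULL;
> > > > > 		synchronize_rcu();	/* yecchhh... */
> > > > > 		mntput(mnt);
> > > > > 	}
> > > > > }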
> > > >
> > > > NAK. There's code that relies upon kern_unmount() being
> > > > synchronous. That's precisely the reason why MNT_INTERNAL
> > > > is treated that way in mntput_no_expire().
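> > > >
> > > > The tail of mntput_no_expire() looks roughly like this (paraphrased
> > > > from fs/namespace.c from memory, so details may differ); the final
> > > > line is what those callers depend on:
> > > >
> > > > 	if (likely(!(mnt->mnt.mnt_flags & MNT_INTERNAL))) {
> > > > 		struct task_struct *task = current;
> > > >
> > > > 		/* ordinary mounts: defer cleanup to task_work or a workqueue */
> > > > 		if (likely(!(task->flags & PF_KTHREAD))) {
> > > > 			init_task_work(&mnt->mnt_rcu, __cleanup_mnt);
> > > > 			if (!task_work_add(task, &mnt->mnt_rcu, TWA_RESUME))
> > > > 				return;
> > > > 		}
> > > > 		if (llist_add(&mnt->mnt_llist, &delayed_mntput_list))
> > > > 			schedule_delayed_work(&delayed_mntput_work, 1);
> > > > 		return;
> > > > 	}
> > > > 	/* MNT_INTERNAL (kern_mount()ed): torn down synchronously, right here */
> > > > 	cleanup_mnt(mnt);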
> > >
> > > Fair enough. Should I make a kern_unmount_rcu() version
> > > that gets called just from mq_put_mnt()?
> >
> > Umm... I'm not sure you can afford having struct ipc_namespace
> > freed and reused before the mqueue superblock gets at least to
> > deactivate_locked_super().
>
> BTW, that's a good demonstration of the problems with making those
> beasts async. struct mount is *not* accessed past kern_unmount(),
> but the objects used by the superblock might very well be - in
> this case they (struct ipc_namespace, pointed to by s->s_fs_info)
> are freed by the caller after kern_unmount() returns. And possibly
> reused. Now note that they are used as search keys by
> mqueue_get_tree() and it becomes very fishy.
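>
> To make the "search keys" part concrete: mqueue_get_tree() is roughly
> the following (paraphrased from ipc/mqueue.c). get_tree_keyed() stores
> the ipc_namespace pointer in ->s_fs_info and uses it to look up an
> existing superblock, so a recycled ipc_namespace address can match a
> superblock that is still on its way out:
>
> static int mqueue_get_tree(struct fs_context *fc)
> {
> 	struct mqueue_fs_context *ctx = fc->fs_private;
>
> 	return get_tree_keyed(fc, mqueue_fill_super, ctx->ipc_ns);
> }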
>
> If you want to go that way, make it something like
>
> void put_ipc_ns(struct ipc_namespace *ns)
> {
> 	if (refcount_dec_and_lock(&ns->ns.count, &mq_lock)) {
> 		mq_clear_sbinfo(ns);
> 		spin_unlock(&mq_lock);
> 		kern_unmount_rcu(ns->mq_mnt);
> 	}
> }
>
> and give mqueue this for ->kill_sb():
>
> static void mqueue_kill_super(struct super_block *sb)
> {
> 	struct ipc_namespace *ns = sb->s_fs_info;
>
> 	kill_litter_super(sb);
> 	/* ... do the rest of free_ipc_ns() on ns here ... */
> }
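>
> And kern_unmount_rcu() itself could be something along these lines -
> purely an illustrative sketch, not the actual patch; the wrapper
> struct and function names below are made up:
>
> struct kern_unmount_rcu_req {		/* hypothetical, not in the tree */
> 	struct rcu_work rwork;
> 	struct vfsmount *mnt;
> };
>
> static void kern_unmount_rcu_workfn(struct work_struct *work)
> {
> 	struct kern_unmount_rcu_req *req =
> 		container_of(to_rcu_work(work), struct kern_unmount_rcu_req, rwork);
>
> 	/* runs from a workqueue, after an RCU grace period */
> 	mntput(req->mnt);
> 	kfree(req);
> }
>
> void kern_unmount_rcu(struct vfsmount *mnt)
> {
> 	struct kern_unmount_rcu_req *req;
>
> 	if (IS_ERR_OR_NULL(mnt))
> 		return;
>
> 	/* make it short-term, as kern_unmount() does */
> 	real_mount(mnt)->mnt_ns = NULL;
>
> 	req = kmalloc(sizeof(*req), GFP_KERNEL);
> 	if (!req) {
> 		/* no memory - fall back to the synchronous path */
> 		synchronize_rcu();
> 		mntput(mnt);
> 		return;
> 	}
> 	req->mnt = mnt;
> 	INIT_RCU_WORK(&req->rwork, kern_unmount_rcu_workfn);
> 	queue_rcu_work(system_wq, &req->rwork);
> }
>
> A real version would presumably embed the rcu_work in struct mount
> instead of allocating here, but that means growing the structure.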
>
> One thing: kern_unmount_rcu() needs a very big warning about
> the caution needed from its callers. It's really not safe
> for general use, and it will be a temptation for folks with
> scalability problems like this one to just use it instead of
> kern_unmount() and declare the problem solved.
FWIW, that won't work correctly wrt failure exits. I'm digging
through the lifetime rules in there right now, will post when
I'm done.