Message-ID: <20250409131444.9K2lwziT@linutronix.de>
Date: Wed, 9 Apr 2025 15:14:44 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Christian Brauner <brauner@...nel.org>
Cc: Eric Chanudet <echanude@...hat.com>,
Alexander Viro <viro@...iv.linux.org.uk>, Jan Kara <jack@...e.cz>,
Clark Williams <clrkwllms@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>, Ian Kent <ikent@...hat.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-rt-devel@...ts.linux.dev,
Alexander Larsson <alexl@...hat.com>,
Lucas Karpinski <lkarpins@...hat.com>
Subject: Re: [PATCH v4] fs/namespace: defer RCU sync for MNT_DETACH umount
On 2025-04-09 12:37:06 [+0200], Christian Brauner wrote:
> I still hate this with a passion because it adds another special-sauce
> path into the unlock path. I've folded the following diff into it so it
> at least doesn't start passing that pointless boolean and doesn't
> introduce __namespace_unlock(). Just use a global variable and pick the
> value off of it just as we do with the lists. Testing this now:
I tried to apply this on top of the previous one but all chunks
failed.
One question: do we need this special lazy/MNT_DETACH case at all?
Couldn't we handle them all via queue_rcu_work()?
If so, couldn't we make deferred_free_mounts global and have two
release lists, say release_list and release_list_next_gp? The first one
would be used if queue_rcu_work() returns true, otherwise the second.
Then, once defer_free_mounts() is done and release_list_next_gp is not
empty, it would move release_list_next_gp -> release_list and invoke
queue_rcu_work() again.
This would avoid the kmalloc, synchronize_rcu_expedited() and the
special-sauce.
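
Roughly what I have in mind (completely untested sketch; free_mounts(),
defer_mount_free() and the spinlock are made-up names for illustration,
in fs/namespace.c this would sit under the existing locking instead):

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

static DEFINE_SPINLOCK(release_list_lock);
static HLIST_HEAD(release_list);		/* covered by the pending GP */
static HLIST_HEAD(release_list_next_gp);	/* freed one GP later */
static struct rcu_work deferred_free_mounts;

/* stand-in for the actual teardown of every mount on @head */
static void free_mounts(struct hlist_head *head);

/* called from namespace_unlock() with the unmounted mounts on @head */
static void defer_mount_free(struct hlist_head *head)
{
	spin_lock(&release_list_lock);
	if (queue_rcu_work(system_wq, &deferred_free_mounts))
		/* work was idle: this batch is covered by the GP just requested */
		hlist_move_list(head, &release_list);
	else
		/* work already pending: park the batch for the next round */
		hlist_move_list(head, &release_list_next_gp);
	spin_unlock(&release_list_lock);
}

static void defer_free_mounts(struct work_struct *work)
{
	HLIST_HEAD(head);

	spin_lock(&release_list_lock);
	hlist_move_list(&release_list, &head);
	spin_unlock(&release_list_lock);

	free_mounts(&head);

	spin_lock(&release_list_lock);
	if (!hlist_empty(&release_list_next_gp)) {
		hlist_move_list(&release_list_next_gp, &release_list);
		queue_rcu_work(system_wq, &deferred_free_mounts);
	}
	spin_unlock(&release_list_lock);
}

with INIT_RCU_WORK(&deferred_free_mounts, defer_free_mounts) done once
during init.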
> diff --git a/fs/namespace.c b/fs/namespace.c
> index e5b0b920dd97..25599428706c 100644
> --- a/fs/namespace.c
> +++ b/fs/namespace.c
> @@ -1840,29 +1842,21 @@ static void __namespace_unlock(bool lazy)
…
> + d = kmalloc(sizeof(struct deferred_free_mounts), GFP_KERNEL);
> + if (d) {
> + hlist_move_list(&head, &d->release_list);
> + INIT_RCU_WORK(&d->rwork, defer_free_mounts);
> + queue_rcu_work(system_wq, &d->rwork);
Couldn't we use system_unbound_wq here?
Sebastian