Message-ID: <ZeZRk-1Kx-s0Nz34@pavilion.home>
Date: Mon, 4 Mar 2024 23:56:19 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Uladzislau Rezki <urezki@...il.com>
Cc: "Paul E . McKenney" <paulmck@...nel.org>, RCU <rcu@...r.kernel.org>,
Neeraj upadhyay <Neeraj.Upadhyay@....com>,
Boqun Feng <boqun.feng@...il.com>, Hillf Danton <hdanton@...a.com>,
Joel Fernandes <joel@...lfernandes.org>,
LKML <linux-kernel@...r.kernel.org>,
Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>
Subject: Re: [PATCH v5 2/4] rcu: Reduce synchronize_rcu() latency
On Mon, Mar 04, 2024 at 05:23:13PM +0100, Uladzislau Rezki wrote:
> On Mon, Mar 04, 2024 at 12:55:47PM +0100, Frederic Weisbecker wrote:
> The easiest way is to drop the patch. To address it we can go with:
>
> <snip>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 31f3a61f9c38..9aa2cd46583e 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1661,16 +1661,8 @@ static void rcu_sr_normal_gp_cleanup(void)
> * wait-head is released if last. The worker is not kicked.
> */
> llist_for_each_safe(rcu, next, wait_tail->next) {
> - if (rcu_sr_is_wait_head(rcu)) {
> - if (!rcu->next) {
> - rcu_sr_put_wait_head(rcu);
> - wait_tail->next = NULL;
> - } else {
> - wait_tail->next = rcu;
> - }
> -
> + if (rcu_sr_is_wait_head(rcu))
> break;
> - }
>
> rcu_sr_normal_complete(rcu);
> // It can be last, update a next on this step.
> <snip>
>
> i.e. the processing of users from the GP kthread is still there. The work
> is then triggered to perform a final complete() (if there are users) plus
> releasing of wait-heads, so we do not race anymore.
It's worth mentioning that this doesn't avoid scheduling the workqueue.
Except perhaps for the very first time rcu_sr_normal_gp_cleanup() is called,
the workqueue will always have to be scheduled at least in order to release the
wait_tail of the previous rcu_sr_normal_gp_cleanup() call.
But indeed you keep the optimization that performs the completions themselves
synchronously from the GP kthread if there aren't too many of them (which
probably is the case most of the time).
> I am OK with both cases. Dropping the patch will make it more simple
> for sure.
I am ok with both cases as well :-)
You choose. But note that the time spent doing the completions from the GP
kthread may come at the expense of delaying the start of the next grace period,
on which further synchronous RCU calls may in turn depend...
Thanks.
>
> --
> Uladzislau Rezki
>
>