Message-ID: <ZfHDwPkPHulJHrD0@localhost.localdomain>
Date: Wed, 13 Mar 2024 16:18:24 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Neeraj Upadhyay <Neeraj.Upadhyay@....com>
Cc: paulmck@...nel.org, joel@...lfernandes.org, josh@...htriplett.org,
boqun.feng@...il.com, rostedt@...dmis.org,
mathieu.desnoyers@...icios.com, jiangshanlai@...il.com,
qiang.zhang1211@...il.com, rcu@...r.kernel.org,
linux-kernel@...r.kernel.org, neeraj.upadhyay@...nel.org
Subject: Re: [PATCH] rcu: Reduce synchronize_rcu() delays when all wait heads
are in use
On Wed, Mar 13, 2024 at 02:02:28PM +0530, Neeraj Upadhyay wrote:
> When all wait heads are in use, which can happen when
> rcu_sr_normal_gp_cleanup_work()'s callback processing
> is slow, any new synchronize_rcu() user's rcu_synchronize
> node's processing is deferred to future grace periods. This
> can result in a long list of synchronize_rcu() invocations
> waiting for full grace period processing, which can delay
> freeing of memory. Mitigate this problem by using the first
> node in the list as the wait tail when all wait heads are in
> use. While methods to speed up callback processing would be
> needed to recover from this situation, allowing new nodes to
> complete their grace period can help prevent delays due to a
> fixed number of wait head nodes.
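
As a rough userspace sketch of the scheme described above (not the
kernel code itself; all names below are made up for illustration):
a fixed pool of dummy wait heads separates batches of waiters on a
singly-linked list, and when the pool runs dry the first pending
node itself is reused as the wait tail:

#include <stdbool.h>
#include <stddef.h>

struct node { struct node *next; };

#define WAIT_HEAD_MAX 5

static struct node wait_heads[WAIT_HEAD_MAX];
static bool wait_head_busy[WAIT_HEAD_MAX];

/* Grab a free dummy wait head, or NULL if the pool is exhausted. */
static struct node *get_wait_head(void)
{
	int i;

	for (i = 0; i < WAIT_HEAD_MAX; i++) {
		if (!wait_head_busy[i]) {
			wait_head_busy[i] = true;
			return &wait_heads[i];
		}
	}
	return NULL;
}

/*
 * Pick the wait tail for a new grace period: normally a fresh dummy
 * wait head is injected in front of the pending waiters; with the
 * patch, fall back to the first pending node itself when all wait
 * heads are busy, so new waiters still get a grace period instead
 * of piling up behind a full pool.
 */
static struct node *pick_wait_tail(struct node *first)
{
	struct node *wh = get_wait_head();

	if (wh) {
		wh->next = first;	/* inject the dummy node */
		return wh;
	}
	return first;	/* fallback: reuse the first pending node */
}
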
>
> Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@....com>
> ---
> kernel/rcu/tree.c | 27 +++++++++++++--------------
> 1 file changed, 13 insertions(+), 14 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 9fbb5ab57c84..bdccce1ed62f 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1470,14 +1470,11 @@ static void rcu_poll_gp_seq_end_unlocked(unsigned long *snap)
> * for this new grace period. Given that there are a fixed
> * number of wait nodes, if all wait nodes are in use
> * (which can happen when kworker callback processing
> - * is delayed) and additional grace period is requested.
> - * This means, a system is slow in processing callbacks.
> - *
> - * TODO: If a slow processing is detected, a first node
> - * in the llist should be used as a wait-tail for this
> - * grace period, therefore users which should wait due
> - * to a slow process are handled by _this_ grace period
> - * and not next.
> + * is delayed), the first node in the llist is used as the
> + * wait tail for this grace period. This means the first
> + * node has to go through additional grace periods before it
> + * becomes part of the wait callbacks. This should be OK,
> + * as the system is slow in processing callbacks anyway.
> *
> * Below is an illustration of how the done and wait
> * tail pointers move from one set of rcu_synchronize nodes
> @@ -1725,15 +1722,17 @@ static bool rcu_sr_normal_gp_init(void)
> return start_new_poll;
>
> wait_head = rcu_sr_get_wait_head();
> - if (!wait_head) {
> - // Kick another GP to retry.
> + if (wait_head) {
> + /* Inject a wait-dummy-node. */
> + llist_add(wait_head, &rcu_state.srs_next);
> + } else {
> + // Kick another GP for first node.
> start_new_poll = true;
> - return start_new_poll;
> + if (first == rcu_state.srs_done_tail)
> + return start_new_poll;
> + wait_head = first;
This means you're setting a non-wait-head as srs_wait_tail, right?
Doesn't it trigger the following warning in rcu_sr_normal_gp_cleanup():

	WARN_ON_ONCE(!rcu_sr_is_wait_head(wait_tail));
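
(For reference, rcu_sr_is_wait_head() is, as far as I can tell, a
pointer-range test against the static wait-node pool, so a caller's
stack-allocated rcu_synchronize node used as a fallback wait tail can
never pass it. A simplified standalone sketch of that kind of check,
with made-up names:)

#include <stdbool.h>

#define WAIT_HEAD_MAX 5

struct node { struct node *next; };

static struct node wait_heads[WAIT_HEAD_MAX];

/* A node is a wait head only if it lives inside the static pool. */
static bool is_wait_head(struct node *n)
{
	return n >= &wait_heads[0] && n <= &wait_heads[WAIT_HEAD_MAX - 1];
}
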
Also there is a risk that this non-wait-head later gets assigned as
rcu_state.srs_done_tail. Then this pending sr may not be completed
until the next grace period calls rcu_sr_normal_gp_cleanup()? (Because
the work doesn't take care of rcu_state.srs_done_tail itself.) And
then the delay can be arbitrary.

And the next grace period completing this sr (that non-wait-head set
as rcu_state.srs_done_tail), thus allowing its caller to wipe it out
of its stack, may race with the cleanup work dereferencing it?
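
(A hypothetical interleaving of that race, as I understand it:

   cleanup work                        GP kthread / caller
   ------------                        -------------------
   read rcu_state.srs_done_tail
   (points to the non-wait-head)
                                       next GP completes that sr;
                                       caller returns from
                                       synchronize_rcu() and its
                                       on-stack node is reclaimed
   dereference the saved pointer
   -> potential use-after-free)
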
Thanks.
> }
>
> - /* Inject a wait-dummy-node. */
> - llist_add(wait_head, &rcu_state.srs_next);
> -
> /*
> * A waiting list of rcu_synchronize nodes should be empty on
> * this step, since a GP-kthread, rcu_gp_init() -> gp_cleanup(),
> --
> 2.34.1
>
>