Message-ID: <6c1ac571-b758-4168-a992-3704c60dba61@amd.com>
Date: Wed, 13 Mar 2024 21:34:05 +0530
From: Neeraj Upadhyay <Neeraj.Upadhyay@....com>
To: Joel Fernandes <joel@...lfernandes.org>, paulmck@...nel.org,
frederic@...nel.org, josh@...htriplett.org, boqun.feng@...il.com,
rostedt@...dmis.org, mathieu.desnoyers@...icios.com, jiangshanlai@...il.com,
qiang.zhang1211@...il.com
Cc: rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
neeraj.upadhyay@...nel.org
Subject: Re: [PATCH] rcu: Reduce synchronize_rcu() delays when all wait heads
are in use
Hi Joel,
On 3/13/2024 8:10 PM, Joel Fernandes wrote:
> Hi Neeraj,
>
> On 3/13/2024 4:32 AM, Neeraj Upadhyay wrote:
>> When all wait heads are in use, which can happen when
>> rcu_sr_normal_gp_cleanup_work()'s callback processing
>> is slow, any new synchronize_rcu() user's rcu_synchronize
>> node's processing is deferred to future grace periods.
>> This can result in a long list of synchronize_rcu()
>> invocations waiting for full grace-period processing,
>> which can delay freeing of memory. Mitigate this problem
>> by using the first node in the list as the wait tail when
>> all wait heads are in use. While methods to speed up
>> callback processing would be needed to recover from this
>> situation, allowing new nodes to complete their grace
>> period can help prevent delays due to the fixed number
>> of wait head nodes.
>>
>> Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@....com>
>> ---
>> kernel/rcu/tree.c | 27 +++++++++++++--------------
>> 1 file changed, 13 insertions(+), 14 deletions(-)
>>
>> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
>> index 9fbb5ab57c84..bdccce1ed62f 100644
>> --- a/kernel/rcu/tree.c
>> +++ b/kernel/rcu/tree.c
>> @@ -1470,14 +1470,11 @@ static void rcu_poll_gp_seq_end_unlocked(unsigned long *snap)
>> * for this new grace period. Given that there are a fixed
>> * number of wait nodes, if all wait nodes are in use
>> * (which can happen when kworker callback processing
>> - * is delayed) and additional grace period is requested.
>> - * This means, a system is slow in processing callbacks.
>> - *
>> - * TODO: If a slow processing is detected, a first node
>> - * in the llist should be used as a wait-tail for this
>> - * grace period, therefore users which should wait due
>> - * to a slow process are handled by _this_ grace period
>> - * and not next.
>> + * is delayed), the first node in the llist is used as the
>> + * wait tail for this grace period. This means the first
>> + * node has to go through additional grace periods before
>> + * it is part of the wait callbacks. This should be OK, as
>> + * the system is slow in processing callbacks anyway.
>> *
>> * Below is an illustration of how the done and wait
>> * tail pointers move from one set of rcu_synchronize nodes
>> @@ -1725,15 +1722,17 @@ static bool rcu_sr_normal_gp_init(void)
>> return start_new_poll;
>>
>> wait_head = rcu_sr_get_wait_head();
>> - if (!wait_head) {
>> - // Kick another GP to retry.
>> + if (wait_head) {
>> + /* Inject a wait-dummy-node. */
>> + llist_add(wait_head, &rcu_state.srs_next);
>> + } else {
>> + // Kick another GP for first node.
>> start_new_poll = true;
>> - return start_new_poll;
>> + if (first == rcu_state.srs_done_tail)
>
> small nit:
> Does done_tail access here need smp_load_acquire() or READ_ONCE() to match the
> other users?
>
As srs_done_tail is only updated in the RCU GP kthread context, I think it is not required.
Please correct me if I am wrong here.
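
For illustration only, here is a minimal userspace sketch of the rule in
question, using hypothetical names (this is not the actual tree.c code and
done_tail below is just a stand-in variable): a plain load is fine when the
only reads happen in the single context that performs all updates, whereas
READ_ONCE() (or an acquire load) is needed for readers that can run
concurrently with the updater. Compiles as kernel-style GNU C.

#include <stdio.h>

/* Simplified stand-in for the kernel's READ_ONCE(). */
#define READ_ONCE(x) (*(const volatile typeof(x) *)&(x))

static int *done_tail;  /* hypothetical: written by exactly one thread */

/* Called only from the updating thread: a plain load suffices. */
static int *updater_read(void)
{
        return done_tail;
}

/* Called from contexts that can race with the updater: use READ_ONCE(). */
static int *concurrent_read(void)
{
        return READ_ONCE(done_tail);
}

int main(void)
{
        static int gp;

        done_tail = &gp;
        printf("%p %p\n", (void *)updater_read(), (void *)concurrent_read());
        return 0;
}
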
> Also if you don't mind could you please rebase your patch on top of mine [1] ? I
> think it will otherwise trigger this warning in my patch:
Sure!
Thanks
Neeraj
>
> WARN_ON_ONCE(!rcu);
>
> Because I always assume there to be at least 2 wait heads at clean up time.
>
> [1] https://lore.kernel.org/all/20240308224439.281349-1-joel@joelfernandes.org/
>
> Thanks!
>
> - Joel
>
>
>> + return start_new_poll;
>> + wait_head = first;
>> }
>>
>> - /* Inject a wait-dummy-node. */
>> - llist_add(wait_head, &rcu_state.srs_next);
>> -
>> /*
>> * A waiting list of rcu_synchronize nodes should be empty on
>> * this step, since a GP-kthread, rcu_gp_init() -> gp_cleanup(),
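
To make the control flow of the fallback easier to follow outside the diff
context, here is a minimal single-threaded userspace sketch. The names used
below (get_free_wait_head(), srs_next, srs_done_tail as plain pointers, the
non-atomic list add) are simplified stand-ins for illustration, not the
kernel's rcu_sr_get_wait_head(), lockless llist API, or rcu_state fields.

#include <stdbool.h>
#include <stdio.h>

struct llist_node { struct llist_node *next; };

/* Hypothetical stand-ins for the rcu_state fields discussed above. */
static struct llist_node *srs_next;       /* llist of waiters and wait heads */
static struct llist_node *srs_done_tail;  /* wait head marking the done set  */

/* Placeholder for rcu_sr_get_wait_head(): returns NULL here to model the
 * "all wait heads are in use" case. */
static struct llist_node *get_free_wait_head(void)
{
        return NULL;
}

/* Simplified, non-atomic stand-in for the kernel's lockless llist_add(). */
static void sketch_llist_add(struct llist_node *n, struct llist_node **head)
{
        n->next = *head;
        *head = n;
}

/* Mirrors the shape of the fallback added to rcu_sr_normal_gp_init():
 * use a free wait head if one exists, otherwise reuse the first node
 * already on the llist, unless that node is the current done tail. */
static bool pick_wait_tail(struct llist_node *first, struct llist_node **wait_tail)
{
        bool start_new_poll = false;
        struct llist_node *wait_head = get_free_wait_head();

        if (wait_head) {
                /* Inject a wait-dummy-node. */
                sketch_llist_add(wait_head, &srs_next);
        } else {
                /* All wait heads busy: kick another GP for the first node. */
                start_new_poll = true;
                if (first == srs_done_tail)
                        return start_new_poll;
                wait_head = first;
        }
        *wait_tail = wait_head;
        return start_new_poll;
}

int main(void)
{
        struct llist_node first = { .next = NULL };
        struct llist_node *wait_tail = NULL;

        srs_next = &first;
        printf("start_new_poll=%d wait_tail==&first: %d\n",
               pick_wait_tail(&first, &wait_tail), wait_tail == &first);
        return 0;
}

The point of the fallback is that running out of wait heads no longer
unconditionally defers new waiters to future grace periods: the first queued
node is reused as the wait tail unless it is already the done tail.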