Message-ID: <601ecb12-ae2e-9608-7127-c2cddc8038a6@quicinc.com>
Date: Mon, 13 Dec 2021 14:25:30 +0530
From: Neeraj Upadhyay <quic_neeraju@...cinc.com>
To: David Woodhouse <dwmw2@...radead.org>, <paulmck@...nel.org>,
<frederic@...nel.org>, <josh@...htriplett.org>,
<rostedt@...dmis.org>, <mathieu.desnoyers@...icios.com>,
<jiangshanlai@...il.com>, <joel@...lfernandes.org>
CC: <rcu@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<urezki@...il.com>, <boqun.feng@...il.com>
Subject: Re: [PATCH v2] rcu/nocb: Handle concurrent nocb kthreads creation
Hi David,
Thanks for the review; some replies inline.
On 12/13/2021 1:48 PM, David Woodhouse wrote:
> On Sat, 2021-12-11 at 22:31 +0530, Neeraj Upadhyay wrote:
>> When multiple CPUs in the same nocb gp/cb group concurrently
>> come online, they might try to concurrently create the same
>> rcuog kthread. Fix this by using nocb gp CPU's spawn mutex to
>> provide mutual exclusion for the rcuog kthread creation code.
>>
>> Signed-off-by: Neeraj Upadhyay <quic_neeraju@...cinc.com>
>> ---
>> Change in v2:
>> Fix missing mutex_unlock in nocb gp kthread creation err path.
>
> I think this ends up being not strictly necessary in the short term too
> because we aren't currently planning to run rcutree_prepare_cpu()
> concurrently anyway. But harmless and worth fixing in the longer term.
>
> Although, if I've already added a mutex for adding the boost thread,
> could we manage to use the *same* mutex instead of adding another one?
>
Let me think about it; the nocb gp and cb kthreads are grouped based on
rcu_nocb_gp_stride, whereas the boost kthreads are per rcu_node (rnp), so I
need to see how we can use a common mutex for both.
> Acked-by: David Woodhouse <dwmw@...zon.co.uk>
>
>> + mutex_unlock(&rdp_gp->nocb_gp_kthread_mutex);
>> return;
>> + }
>> WRITE_ONCE(rdp_gp->nocb_gp_kthread, t);
>> }
>> + mutex_unlock(&rdp_gp->nocb_gp_kthread_mutex);
>>
>> /* Spawn the kthread for this CPU. */
>
> Some whitespace damage there.
Will fix in next version.
Thanks
Neeraj