Message-ID: <5fff0b52-ae83-52ea-b606-1aa73def05aa@quicinc.com>
Date: Tue, 19 Jul 2022 14:58:04 +0530
From: Neeraj Upadhyay <quic_neeraju@...cinc.com>
To: "Paul E. McKenney" <paulmck@...nel.org>, <rcu@...r.kernel.org>
CC: <linux-kernel@...r.kernel.org>, <kernel-team@...com>,
<rostedt@...dmis.org>, Zqiang <qiang1.zhang@...el.com>,
Boqun Feng <boqun.feng@...il.com>,
Uladzislau Rezki <uladzislau.rezki@...y.com>,
Joel Fernandes <joel@...lfernandes.org>,
Frederic Weisbecker <frederic@...nel.org>
Subject: Re: [PATCH rcu 2/7] rcu/nocb: Invert rcu_state.barrier_mutex VS
hotplug lock locking order
On 6/21/2022 4:14 AM, Paul E. McKenney wrote:
> From: Zqiang <qiang1.zhang@...el.com>
>
> In case of failure to spawn either rcuog or rcuo[p] kthreads for a given
> rdp, rcu_nocb_rdp_deoffload() needs to be called with the hotplug
> lock and the barrier_mutex held. However, the CPUs write lock is already
> held while calling rcutree_prepare_cpu(). It's not possible to call
> rcu_nocb_rdp_deoffload() from there while locking only the barrier_mutex,
> as this would result in a lock inversion against
> rcu_nocb_cpu_deoffload(), which holds both locks in the reverse order.
>
> Simply solve this by inverting the locking order inside
> rcu_nocb_cpu_[de]offload(). This will also be a prerequisite for toggling
> NOCB states via cpusets anyway.
>
> Signed-off-by: Zqiang <qiang1.zhang@...el.com>
> Cc: Neeraj Upadhyay <quic_neeraju@...cinc.com>
> Cc: Boqun Feng <boqun.feng@...il.com>
> Cc: Uladzislau Rezki <uladzislau.rezki@...y.com>
> Cc: Joel Fernandes <joel@...lfernandes.org>
> Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
> Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> ---
Reviewed-by: Neeraj Upadhyay <quic_neeraju@...cinc.com>
Thanks
Neeraj
> kernel/rcu/tree_nocb.h | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index dac74952e1d1b..f2f2cab6285a1 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1055,8 +1055,8 @@ int rcu_nocb_cpu_deoffload(int cpu)
> struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> int ret = 0;
>
> - mutex_lock(&rcu_state.barrier_mutex);
> cpus_read_lock();
> + mutex_lock(&rcu_state.barrier_mutex);
> if (rcu_rdp_is_offloaded(rdp)) {
> if (cpu_online(cpu)) {
> ret = work_on_cpu(cpu, rcu_nocb_rdp_deoffload, rdp);
> @@ -1067,8 +1067,8 @@ int rcu_nocb_cpu_deoffload(int cpu)
> ret = -EINVAL;
> }
> }
> - cpus_read_unlock();
> mutex_unlock(&rcu_state.barrier_mutex);
> + cpus_read_unlock();
>
> return ret;
> }
> @@ -1134,8 +1134,8 @@ int rcu_nocb_cpu_offload(int cpu)
> struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> int ret = 0;
>
> - mutex_lock(&rcu_state.barrier_mutex);
> cpus_read_lock();
> + mutex_lock(&rcu_state.barrier_mutex);
> if (!rcu_rdp_is_offloaded(rdp)) {
> if (cpu_online(cpu)) {
> ret = work_on_cpu(cpu, rcu_nocb_rdp_offload, rdp);
> @@ -1146,8 +1146,8 @@ int rcu_nocb_cpu_offload(int cpu)
> ret = -EINVAL;
> }
> }
> - cpus_read_unlock();
> mutex_unlock(&rcu_state.barrier_mutex);
> + cpus_read_unlock();
>
> return ret;
> }
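
For future reference, the inversion described in the changelog can be
modeled outside the kernel. The following is a minimal, illustrative
userspace sketch (pthreads), not kernel code: hotplug_rwlock stands in
for the CPU hotplug lock (read lock modeling cpus_read_lock(), write
lock modeling the hotplug write path) and barrier_mutex_model stands in
for rcu_state.barrier_mutex. All names here are made up for the sketch.

/* Illustrative model only; may deadlock under unlucky scheduling,
 * which is exactly the ABBA pattern the patch removes. Build with
 * gcc -pthread. */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t hotplug_rwlock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t barrier_mutex_model = PTHREAD_MUTEX_INITIALIZER;

/* Old ordering in rcu_nocb_cpu_[de]offload(): barrier mutex first,
 * then the hotplug lock for read. */
static void *deoffload_old_order(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&barrier_mutex_model);
	pthread_rwlock_rdlock(&hotplug_rwlock);	/* waits on the hotplug writer */
	/* ... (de)offload work ... */
	pthread_rwlock_unlock(&hotplug_rwlock);
	pthread_mutex_unlock(&barrier_mutex_model);
	return NULL;
}

/* Hotplug path: already holds the hotplug lock for write, then needs
 * the barrier mutex (e.g. to deoffload after a kthread spawn failure). */
static void *hotplug_path(void *arg)
{
	(void)arg;
	pthread_rwlock_wrlock(&hotplug_rwlock);
	pthread_mutex_lock(&barrier_mutex_model);	/* reverse order: ABBA risk */
	/* ... deoffload the rdp ... */
	pthread_mutex_unlock(&barrier_mutex_model);
	pthread_rwlock_unlock(&hotplug_rwlock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, deoffload_old_order, NULL);
	pthread_create(&b, NULL, hotplug_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("done\n");
	return 0;
}

With the patch applied, the (de)offload side takes the hotplug read lock
first and the barrier mutex second, matching the hotplug path's order,
so the two threads in this sketch could no longer end up waiting on each
other.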