Message-Id: <20220525221055.1152307-3-frederic@kernel.org>
Date: Thu, 26 May 2022 00:10:53 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Frederic Weisbecker <frederic@...nel.org>,
Tejun Heo <tj@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
"Paul E . McKenney" <paulmck@...nel.org>,
Paul Gortmaker <paul.gortmaker@...driver.com>,
Johannes Weiner <hannes@...xchg.org>,
Marcelo Tosatti <mtosatti@...hat.com>,
Phil Auld <pauld@...hat.com>,
Zefan Li <lizefan.x@...edance.com>,
Waiman Long <longman@...hat.com>,
Daniel Bristot de Oliveira <bristot@...nel.org>,
Nicolas Saenz Julienne <nsaenz@...nel.org>,
rcu@...r.kernel.org
Subject: [PATCH 2/4] rcu/nocb: Prepare to change nocb cpumask from CPU-hotplug protected cpuset caller
The cpuset subsystem is going to use the NOCB (de-)offloading interface
while holding the hotplug lock. Therefore, move the responsibility of
protecting against concurrent CPU-hotplug changes out to the callers of
rcu_nocb_cpumask_update().
Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
Cc: Zefan Li <lizefan.x@...edance.com>
Cc: Tejun Heo <tj@...nel.org>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Paul E. McKenney <paulmck@...nel.org>
Cc: Phil Auld <pauld@...hat.com>
Cc: Nicolas Saenz Julienne <nsaenz@...nel.org>
Cc: Marcelo Tosatti <mtosatti@...hat.com>
Cc: Paul Gortmaker <paul.gortmaker@...driver.com>
Cc: Waiman Long <longman@...hat.com>
Cc: Daniel Bristot de Oliveira <bristot@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>
---
kernel/rcu/rcutorture.c | 2 ++
kernel/rcu/tree_nocb.h | 4 ++--
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index f912ff4869b3..5a3029550e83 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -1886,6 +1886,7 @@ static int rcu_nocb_toggle(void *arg)
 	do {
 		r = torture_random(&rand);
 		cpu = (r >> 4) % (maxcpu + 1);
+		cpus_read_lock();
 		if (r & 0x1) {
 			rcu_nocb_cpumask_update(cpumask_of(cpu), true);
 			atomic_long_inc(&n_nocb_offload);
@@ -1893,6 +1894,7 @@ static int rcu_nocb_toggle(void *arg)
 			rcu_nocb_cpumask_update(cpumask_of(cpu), false);
 			atomic_long_inc(&n_nocb_deoffload);
 		}
+		cpus_read_unlock();
 		toggle_delay = torture_random(&rand) % toggle_fuzz + toggle_interval;
 		set_current_state(TASK_INTERRUPTIBLE);
 		schedule_hrtimeout(&toggle_delay, HRTIMER_MODE_REL);
diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 428571ad11e3..6396af6c765a 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1182,12 +1182,13 @@ int rcu_nocb_cpumask_update(struct cpumask *cpumask, bool offload)
 	int err_cpu;
 	cpumask_var_t saved_nocb_mask;
 
+	lockdep_assert_cpus_held();
+
 	if (!alloc_cpumask_var(&saved_nocb_mask, GFP_KERNEL))
 		return -ENOMEM;
 
 	cpumask_copy(saved_nocb_mask, rcu_nocb_mask);
 
-	cpus_read_lock();
 	mutex_lock(&rcu_state.barrier_mutex);
 	for_each_cpu(cpu, cpumask) {
 		if (offload) {
@@ -1221,7 +1222,6 @@ int rcu_nocb_cpumask_update(struct cpumask *cpumask, bool offload)
 	}
 
 	mutex_unlock(&rcu_state.barrier_mutex);
-	cpus_read_unlock();
 
 	free_cpumask_var(saved_nocb_mask);
 
--
2.25.1