Message-Id: <20250916044735.2316171-10-dolinux.peng@gmail.com>
Date: Tue, 16 Sep 2025 12:47:30 +0800
From: pengdonglin <dolinux.peng@...il.com>
To: tj@...nel.org,
tony.luck@...el.com,
jani.nikula@...ux.intel.com,
ap420073@...il.com,
jv@...sburgh.net,
freude@...ux.ibm.com,
bcrl@...ck.org,
trondmy@...nel.org,
longman@...hat.com,
kees@...nel.org
Cc: bigeasy@...utronix.de,
hdanton@...a.com,
paulmck@...nel.org,
linux-kernel@...r.kernel.org,
linux-rt-devel@...ts.linux.dev,
linux-nfs@...r.kernel.org,
linux-aio@...ck.org,
linux-fsdevel@...r.kernel.org,
linux-security-module@...r.kernel.org,
netdev@...r.kernel.org,
intel-gfx@...ts.freedesktop.org,
linux-wireless@...r.kernel.org,
linux-acpi@...r.kernel.org,
linux-s390@...r.kernel.org,
cgroups@...r.kernel.org,
pengdonglin <dolinux.peng@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
pengdonglin <pengdonglin@...omi.com>
Subject: [PATCH v3 09/14] cgroup/cpuset: Remove redundant rcu_read_lock/unlock() under spin_lock
From: pengdonglin <pengdonglin@...omi.com>

Since commit a8bb74acd8efe ("rcu: Consolidate RCU-sched update-side
function definitions"), there is no difference between rcu_read_lock(),
rcu_read_lock_bh() and rcu_read_lock_sched() in terms of the RCU
read-side critical section and the relevant grace period. That means
that spin_lock(), which implies rcu_read_lock_sched(), also implies
rcu_read_lock().

There is no need to explicitly start an RCU read section if one has
already been started implicitly by spin_lock().

Simplify the code and remove the inner rcu_read_lock() and
rcu_read_unlock() invocations.
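
For illustration, a minimal sketch of the pattern being removed
(abridged from cpuset_cpus_allowed() in the diff below; not the
complete function):

	spin_lock_irqsave(&callback_lock, flags); /* implies an RCU read section */
	rcu_read_lock();                          /* redundant */
	cs = task_cs(tsk);                        /* must run in an RCU read section */
	rcu_read_unlock();
	spin_unlock_irqrestore(&callback_lock, flags);

becomes:

	spin_lock_irqsave(&callback_lock, flags);
	cs = task_cs(tsk);
	spin_unlock_irqrestore(&callback_lock, flags);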
Cc: Waiman Long <longman@...hat.com>
Cc: Johannes Weiner <hannes@...xchg.org>
Acked-by: Waiman Long <longman@...hat.com>
Signed-off-by: pengdonglin <pengdonglin@...omi.com>
Signed-off-by: pengdonglin <dolinux.peng@...il.com>
---
 kernel/cgroup/cpuset.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 27adb04df675..9b7e8e8e9411 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -4073,7 +4073,6 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
 	struct cpuset *cs;
 
 	spin_lock_irqsave(&callback_lock, flags);
-	rcu_read_lock();
 	cs = task_cs(tsk);
 
 	if (cs != &top_cpuset)
@@ -4095,7 +4094,6 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
 		cpumask_copy(pmask, possible_mask);
 	}
 
-	rcu_read_unlock();
 	spin_unlock_irqrestore(&callback_lock, flags);
 }
 
@@ -4168,9 +4166,7 @@ nodemask_t cpuset_mems_allowed(struct task_struct *tsk)
 	unsigned long flags;
 
 	spin_lock_irqsave(&callback_lock, flags);
-	rcu_read_lock();
 	guarantee_online_mems(task_cs(tsk), &mask);
-	rcu_read_unlock();
 	spin_unlock_irqrestore(&callback_lock, flags);
 
 	return mask;
@@ -4265,10 +4261,8 @@ bool cpuset_current_node_allowed(int node, gfp_t gfp_mask)
 	/* Not hardwall and node outside mems_allowed: scan up cpusets */
 	spin_lock_irqsave(&callback_lock, flags);
 
-	rcu_read_lock();
 	cs = nearest_hardwall_ancestor(task_cs(current));
 	allowed = node_isset(node, cs->mems_allowed);
-	rcu_read_unlock();
 
 	spin_unlock_irqrestore(&callback_lock, flags);
 	return allowed;
--
2.34.1