Message-ID: <20251127154725.647502625@infradead.org>
Date: Thu, 27 Nov 2025 16:39:46 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: mingo@...nel.org,
 vincent.guittot@...aro.org
Cc: linux-kernel@...r.kernel.org,
 peterz@...radead.org,
 juri.lelli@...hat.com,
 dietmar.eggemann@....com,
 rostedt@...dmis.org,
 bsegall@...gle.com,
 mgorman@...e.de,
 vschneid@...hat.com,
 tj@...nel.org,
 void@...ifault.com,
 arighi@...dia.com,
 changwoo@...lia.com,
 sched-ext@...ts.linux.dev
Subject: [PATCH 3/5] sched: Change rcu_dereference_check_sched_domain() to rcu-sched

Changing rcu_dereference_check_sched_domain() to use
rcu_dereference_sched_check() makes it also consider preempt_disable()
to be equivalent to rcu_read_lock().

Since rcu fully implies rcu_sched, this causes absolutely no change in
behaviour, but it does allow removing a bunch of otherwise redundant
rcu_read_lock() noise.
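
As an illustration (a sketch, not part of the patch): with the relaxed
check, any preempt-disabled region now satisfies lockdep without a
surrounding rcu_read_lock(), e.g.:

	/*
	 * Sketch only; inspect_domain() is a made-up helper. The point is
	 * that preempt_disable() alone (or holding the raw rq lock, which
	 * implies it) is enough to keep the lockdep check quiet here.
	 */
	preempt_disable();
	sd = rcu_dereference_check_sched_domain(this_rq->sd);
	if (sd)
		inspect_domain(sd);
	preempt_enable();

Holding sched_domains_mutex still satisfies the check as well, since the
lockdep_is_held() condition is unchanged.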

Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 kernel/sched/fair.c  |    9 +--------
 kernel/sched/sched.h |    2 +-
 2 files changed, 2 insertions(+), 9 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12853,21 +12853,16 @@ static int sched_balance_newidle(struct
 	 */
 	rq_unpin_lock(this_rq, rf);
 
-	rcu_read_lock();
 	sd = rcu_dereference_check_sched_domain(this_rq->sd);
-	if (!sd) {
-		rcu_read_unlock();
+	if (!sd)
 		goto out;
-	}
 
 	if (!get_rd_overloaded(this_rq->rd) ||
 	    this_rq->avg_idle < sd->max_newidle_lb_cost) {
 
 		update_next_balance(sd, &next_balance);
-		rcu_read_unlock();
 		goto out;
 	}
-	rcu_read_unlock();
 
 	/*
 	 * Include sched_balance_update_blocked_averages() in the cost
@@ -12880,7 +12875,6 @@ static int sched_balance_newidle(struct
 	rq_modified_clear(this_rq);
 	raw_spin_rq_unlock(this_rq);
 
-	rcu_read_lock();
 	for_each_domain(this_cpu, sd) {
 		u64 domain_cost;
 
@@ -12930,7 +12924,6 @@ static int sched_balance_newidle(struct
 		if (pulled_task || !continue_balancing)
 			break;
 	}
-	rcu_read_unlock();
 
 	raw_spin_rq_lock(this_rq);
 
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2009,7 +2009,7 @@ queue_balance_callback(struct rq *rq,
 }
 
 #define rcu_dereference_check_sched_domain(p) \
-	rcu_dereference_check((p), lockdep_is_held(&sched_domains_mutex))
+	rcu_dereference_sched_check((p), lockdep_is_held(&sched_domains_mutex))
 
 /*
  * The domain tree (rq->sd) is protected by RCU's quiescent state transition.


