Message-ID: <174483099840.31282.17802635645325940436.tip-bot2@tip-bot2>
Date: Wed, 16 Apr 2025 19:16:38 -0000
From: "tip-bot2 for K Prateek Nayak" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: K Prateek Nayak <kprateek.nayak@....com>,
 "Peter Zijlstra (Intel)" <peterz@...radead.org>, x86@...nel.org,
 linux-kernel@...r.kernel.org
Subject:
 [tip: sched/core] sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     872aa4de18889be63317a8c0f2de71a3a01e487c
Gitweb:        https://git.kernel.org/tip/872aa4de18889be63317a8c0f2de71a3a01e487c
Author:        K Prateek Nayak <kprateek.nayak@....com>
AuthorDate:    Wed, 09 Apr 2025 05:34:43 
Committer:     Peter Zijlstra <peterz@...radead.org>
CommitterDate: Wed, 16 Apr 2025 21:09:11 +02:00

sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu

Subsequent commits add support for dynamically updating the sched_group
struct's "asym_prefer_cpu" member from a remote CPU. Use READ_ONCE()
when reading "sg->asym_prefer_cpu" to ensure the load balancer always
reads the latest value.

Signed-off-by: K Prateek Nayak <kprateek.nayak@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lore.kernel.org/r/20250409053446.23367-2-kprateek.nayak@amd.com
---
 kernel/sched/fair.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0c19459..5e1bd9e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10251,7 +10251,7 @@ sched_group_asym(struct lb_env *env, struct sg_lb_stats *sgs, struct sched_group
 	    (sgs->group_weight - sgs->idle_cpus != 1))
 		return false;
 
-	return sched_asym(env->sd, env->dst_cpu, group->asym_prefer_cpu);
+	return sched_asym(env->sd, env->dst_cpu, READ_ONCE(group->asym_prefer_cpu));
 }
 
 /* One group has more than one SMT CPU while the other group does not */
@@ -10488,7 +10488,8 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 
 	case group_asym_packing:
 		/* Prefer to move from lowest priority CPU's work */
-		return sched_asym_prefer(sds->busiest->asym_prefer_cpu, sg->asym_prefer_cpu);
+		return sched_asym_prefer(READ_ONCE(sds->busiest->asym_prefer_cpu),
+					 READ_ONCE(sg->asym_prefer_cpu));
 
 	case group_misfit_task:
 		/*
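
For context, here is a minimal userspace sketch (not part of the patch)
of the access pattern the change relies on. The READ_ONCE()/WRITE_ONCE()
macro bodies below are simplified volatile-cast versions of the kernel
helpers, and the struct, field, and function names (sched_group_stub,
remote_updater) are made up for illustration; they only mirror how the
real sched_group::asym_prefer_cpu field is read by the load balancer
while another CPU rewrites it.

#include <pthread.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE() helpers. */
#define READ_ONCE(x)		(*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, val)	(*(volatile __typeof__(x) *)&(x) = (val))

struct sched_group_stub {
	int asym_prefer_cpu;	/* may be rewritten from a remote CPU */
};

static struct sched_group_stub sg = { .asym_prefer_cpu = 0 };

/* Stand-in for the remote CPU that dynamically updates the preferred CPU. */
static void *remote_updater(void *arg)
{
	for (int cpu = 1; cpu <= 8; cpu++)
		WRITE_ONCE(sg.asym_prefer_cpu, cpu);
	return NULL;
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, remote_updater, NULL);

	/*
	 * Stand-in for the load-balancer read path: READ_ONCE() forces a
	 * single, untorn load on every access, so the compiler cannot cache
	 * the value in a register or re-read it inconsistently within one
	 * balancing pass.
	 */
	for (int i = 0; i < 4; i++)
		printf("asym_prefer_cpu seen as %d\n",
		       READ_ONCE(sg.asym_prefer_cpu));

	pthread_join(tid, NULL);
	return 0;
}

The reads added by this patch are presumably meant to pair with
WRITE_ONCE() (or an equivalent single store) on the remote update path
introduced by the subsequent commits the changelog refers to.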
