Date: Mon, 1 Jan 2024 21:16:24 +0530
From: Shrikanth Hegde <sshegde@...ux.vnet.ibm.com>
To: mingo@...nel.org, peterz@...radead.org, vincent.guittot@...aro.org
Cc: sshegde@...ux.vnet.ibm.com, dietmar.eggemann@....com,
linux-kernel@...r.kernel.org, srikar@...ux.vnet.ibm.com,
yu.c.chen@...el.com, tim.c.chen@...ux.intel.com
Subject: [PATCH v2 2/2] sched: add READ_ONCE and use existing helper function to access ->avg_irq
Use the existing helper function cpu_util_irq() instead of accessing
rq->avg_irq.util_avg directly.

It was noted that avg_irq can be updated by a different CPU than the one
trying to access it. Since avg_irq is updated with WRITE_ONCE, pair the
access with READ_ONCE to prevent problematic compiler optimizations such
as load tearing or refetching.
Signed-off-by: Shrikanth Hegde <sshegde@...ux.vnet.ibm.com>
---
kernel/sched/fair.c | 4 +---
kernel/sched/sched.h | 2 +-
2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1aeca3f943a8..02631060ca7e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9221,10 +9221,8 @@ static inline bool others_have_blocked(struct rq *rq)
if (thermal_load_avg(rq))
return true;
-#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
- if (READ_ONCE(rq->avg_irq.util_avg))
+ if (cpu_util_irq(rq))
return true;
-#endif
return false;
}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e58a54bda77d..edc20c5cc7ce 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3125,7 +3125,7 @@ static inline bool uclamp_rq_is_idle(struct rq *rq)
#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
static inline unsigned long cpu_util_irq(struct rq *rq)
{
- return rq->avg_irq.util_avg;
+ return READ_ONCE(rq->avg_irq.util_avg);
}
static inline
--
2.39.3