Message-ID: <161503094191.398.6539724050553755259.tip-bot2@tip-bot2>
Date: Sat, 06 Mar 2021 11:42:21 -0000
From: "tip-bot2 for Vincent Guittot" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Vincent Guittot <vincent.guittot@...aro.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Valentin Schneider <valentin.schneider@....com>,
x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [tip: sched/core] sched/fair: Reduce the window for duplicated update
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 39b6a429c30482c349f1bb3746470fe473cbdb0f
Gitweb: https://git.kernel.org/tip/39b6a429c30482c349f1bb3746470fe473cbdb0f
Author: Vincent Guittot <vincent.guittot@...aro.org>
AuthorDate: Wed, 24 Feb 2021 14:30:07 +01:00
Committer: Ingo Molnar <mingo@...nel.org>
CommitterDate: Sat, 06 Mar 2021 12:40:22 +01:00
sched/fair: Reduce the window for duplicated update
Update last_blocked_load_update_tick at the start of update_blocked_averages()
to reduce the window during which another CPU can start the same update.
Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Reviewed-by: Valentin Schneider <valentin.schneider@....com>
Link: https://lkml.kernel.org/r/20210224133007.28644-8-vincent.guittot@linaro.org
---
kernel/sched/fair.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e87e1b3..f1b55f9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7852,16 +7852,20 @@ static inline bool others_have_blocked(struct rq *rq)
return false;
}
-static inline void update_blocked_load_status(struct rq *rq, bool has_blocked)
+static inline void update_blocked_load_tick(struct rq *rq)
{
- rq->last_blocked_load_update_tick = jiffies;
+ WRITE_ONCE(rq->last_blocked_load_update_tick, jiffies);
+}
+static inline void update_blocked_load_status(struct rq *rq, bool has_blocked)
+{
if (!has_blocked)
rq->has_blocked_load = 0;
}
#else
static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq) { return false; }
static inline bool others_have_blocked(struct rq *rq) { return false; }
+static inline void update_blocked_load_tick(struct rq *rq) {}
static inline void update_blocked_load_status(struct rq *rq, bool has_blocked) {}
#endif
@@ -8022,6 +8026,7 @@ static void update_blocked_averages(int cpu)
struct rq_flags rf;
rq_lock_irqsave(rq, &rf);
+ update_blocked_load_tick(rq);
update_rq_clock(rq);
decayed |= __update_blocked_others(rq, &done);
@@ -8363,7 +8368,7 @@ static bool update_nohz_stats(struct rq *rq)
if (!cpumask_test_cpu(cpu, nohz.idle_cpus_mask))
return false;
- if (!time_after(jiffies, rq->last_blocked_load_update_tick))
+ if (!time_after(jiffies, READ_ONCE(rq->last_blocked_load_update_tick)))
return true;
update_blocked_averages(cpu);
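
For readers outside the kernel tree, here is a minimal userspace sketch of the
pattern the patch applies: stamp a shared "last update" tick *before* doing the
expensive work, so that other CPUs (threads here) polling the tick are more
likely to see a fresh value and skip a duplicated update. This is not the
kernel code: C11 relaxed atomics stand in for WRITE_ONCE()/READ_ONCE(), a
millisecond counter stands in for jiffies, and all names (maybe_update(),
expensive_update(), update_blocked_averages_sketch(), ...) are illustrative
assumptions, not functions from the patch.

/*
 * Userspace sketch of the "stamp the tick early" pattern.
 * Build with: cc -O2 -pthread sketch.c -o sketch
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static _Atomic unsigned long last_update_tick;	/* ~ rq->last_blocked_load_update_tick */
static atomic_int updates_run;			/* how many updates actually ran */

static unsigned long now_ticks(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (unsigned long)(ts.tv_sec * 1000 + ts.tv_nsec / 1000000); /* ms "jiffies" */
}

static void expensive_update(void)
{
	usleep(2000);			/* pretend to decay blocked load */
}

/* Stamp the tick first, then do the work: this shrinks the race window. */
static void update_blocked_averages_sketch(void)
{
	atomic_store_explicit(&last_update_tick, now_ticks(),
			      memory_order_relaxed);	/* ~ WRITE_ONCE() */
	expensive_update();
}

/* Mirrors the update_nohz_stats() check: skip if someone updated recently. */
static bool maybe_update(void)
{
	unsigned long tick = atomic_load_explicit(&last_update_tick,
						  memory_order_relaxed); /* ~ READ_ONCE() */

	if (now_ticks() <= tick)	/* ~ !time_after(jiffies, tick) */
		return false;		/* recent enough, skip the duplicate */

	update_blocked_averages_sketch();
	return true;
}

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100; i++) {
		if (maybe_update())
			atomic_fetch_add(&updates_run, 1);
		usleep(1000);
	}
	return NULL;
}

int main(void)
{
	pthread_t t[4];

	for (int i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);

	printf("updates actually run: %d of %d attempts\n",
	       atomic_load(&updates_run), 4 * 100);
	return 0;
}

Because the tick is published before the slow work starts, a second thread that
checks during the update already sees a current timestamp and backs off; writing
the tick only at the end (as before the patch) would leave the whole duration of
the update as a window for duplicated work.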