Message-ID: <d99bae6c-ac53-d803-95d2-c2a7bcf0c89a@bytedance.com>
Date: Tue, 31 Jan 2023 10:35:19 +0800
From: Hao Jia <jiahao.os@...edance.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...hat.com, mingo@...nel.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vschneid@...hat.com,
mgorman@...hsingularity.net, linux-kernel@...r.kernel.org
Subject: Re: [External] Re: [PATCH] sched/core: Avoid WARN_DOUBLE_CLOCK
warning when CONFIG_SCHED_CORE
On 2023/1/16 Peter Zijlstra wrote:
> On Tue, Dec 06, 2022 at 03:05:50PM +0800, Hao Jia wrote:
>> When we need to call update_rq_clock() to update the rq clock of
>> other CPUs on the same core, we must first clear RQCF_UPDATED in
>> rq->clock_update_flags to avoid the WARN_DOUBLE_CLOCK warning,
>> because at that point rq->clock_update_flags of the other CPUs
>> may already have RQCF_UPDATED set.
>
> So you've found that the WARN_DOUBLE_CLOCK machinery doesn't work for
> core-sched -- but then instead of fixing that machinery, you put
> band-aids on it :/
>
>
Hi Peter,
Sorry for the late reply. I just finished my holiday.
I am trying to adapt the WARN_DOUBLE_CLOCK machinery to core-sched.
If sched_core_enabled(), we hold a core-wide rq->lock, so we can
safely clear RQCF_UPDATED in rq->clock_update_flags of all CPUs on
this core.
This avoids a WARN_DOUBLE_CLOCK warning when we call update_rq_clock()
to update the rq clock of other CPUs on the same core.
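For reference, the check being tripped lives in update_rq_clock();
roughly (simplified from kernel/sched/core.c, clock math elided):

void update_rq_clock(struct rq *rq)
{
	s64 delta;

	lockdep_assert_rq_held(rq);

	if (rq->clock_update_flags & RQCF_ACT_SKIP)
		return;

#ifdef CONFIG_SCHED_DEBUG
	if (sched_feat(WARN_DOUBLE_CLOCK))
		/* Fires if nothing cleared RQCF_UPDATED since the last update */
		SCHED_WARN_ON(rq->clock_update_flags & RQCF_UPDATED);
	rq->clock_update_flags |= RQCF_UPDATED;
#endif

	delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
	if (delta < 0)
		return;
	rq->clock += delta;
	update_rq_clock_task(rq, delta);
}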
We cannot clear rq->clock_update_flags of the other CPUs on the same
core in rq_pin_lock(), because some functions, such as
newidle_balance() and _double_lock_balance(), temporarily drop the
core-wide rq->lock and then re-acquire it with raw_spin_rq_lock().
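For context, rq_pin_lock() only resets the flags of the rq being
pinned, roughly (simplified from kernel/sched/sched.h):

static inline void rq_pin_lock(struct rq *rq, struct rq_flags *rf)
{
	rf->cookie = lockdep_pin_lock(__rq_lockp(rq));

#ifdef CONFIG_SCHED_DEBUG
	/* Only this CPU's rq is touched, not its SMT siblings' */
	rq->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
	rf->clock_update_flags = 0;
#endif
}

So any path that re-takes the lock via raw_spin_rq_lock() bypasses it,
which is why the patch below hooks raw_spin_rq_lock_nested() instead.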
Thanks,
Hao
---
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e838feb6adc5..f279912e30b3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -435,6 +435,21 @@ sched_core_dequeue(struct rq *rq, struct task_struct *p, int flags) { }
 #endif /* CONFIG_SCHED_CORE */
+static inline void sched_core_rq_clock_clear_update(struct rq *rq)
+{
+#ifdef CONFIG_SCHED_DEBUG
+	const struct cpumask *smt_mask;
+	int i;
+	if (rq->core_enabled) {
+		smt_mask = cpu_smt_mask(rq->cpu);
+		for_each_cpu(i, smt_mask) {
+			if (rq->cpu != i)
+				cpu_rq(i)->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
+		}
+	}
+#endif
+}
+
 /*
  * Serialization rules:
  *
@@ -546,6 +561,7 @@ void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
 		if (likely(lock == __rq_lockp(rq))) {
 			/* preempt_count *MUST* be > 1 */
 			preempt_enable_no_resched();
+			sched_core_rq_clock_clear_update(rq);
 			return;
 		}
 		raw_spin_unlock(lock);