Message-ID: <168182960967.404.10362810597852537883.tip-bot2@tip-bot2>
Date: Tue, 18 Apr 2023 14:53:29 -0000
From: "tip-bot2 for Frederic Weisbecker" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Yu Liao <liaoyu15@...wei.com>,
Frederic Weisbecker <frederic@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: timers/core] timers/nohz: Protect idle/iowait sleep time under seqcount

The following commit has been merged into the timers/core branch of tip:
Commit-ID:     620a30fa0bd14878891b22bf2261e6ed4587c2b4
Gitweb:        https://git.kernel.org/tip/620a30fa0bd14878891b22bf2261e6ed4587c2b4
Author:        Frederic Weisbecker <frederic@...nel.org>
AuthorDate:    Wed, 22 Feb 2023 15:46:44 +01:00
Committer:     Thomas Gleixner <tglx@...utronix.de>
CommitterDate: Tue, 18 Apr 2023 16:35:12 +02:00

timers/nohz: Protect idle/iowait sleep time under seqcount

Reading idle/IO sleep time (e.g. from /proc/stat) can race with idle exit
updates because the state machine handling the stats is not atomic and
readers require a coherent batch of reads.

As a result, reading the sleep time may report irrelevant or backward
values.

Fix this by protecting the simple state machine with a seqcount. This is
expected to be cheap enough not to add a measurable performance impact on
the idle path.
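
For readers unfamiliar with the pattern, here is a minimal user-space
sketch of the seqcount scheme, using plain C11 seq_cst atomics in place
of the kernel's seqcount_t and its finer-grained barriers. The
stats_update()/stats_read() names are made up for illustration:

#include <inttypes.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct stats {
	atomic_uint seq;	/* even: stable, odd: update in progress */
	_Atomic uint64_t idle_sleeptime;
	_Atomic uint64_t iowait_sleeptime;
};

/* Writer: bump the sequence to odd, update the stats, bump back to even. */
static void stats_update(struct stats *s, uint64_t d_idle, uint64_t d_iowait)
{
	atomic_fetch_add(&s->seq, 1);		/* begin: seq becomes odd */
	atomic_fetch_add(&s->idle_sleeptime, d_idle);
	atomic_fetch_add(&s->iowait_sleeptime, d_iowait);
	atomic_fetch_add(&s->seq, 1);		/* end: seq is even again */
}

/* Reader: retry while an update is in flight or the sequence moved. */
static void stats_read(struct stats *s, uint64_t *idle, uint64_t *iowait)
{
	unsigned int seq;

	do {
		seq = atomic_load(&s->seq);
		*idle = atomic_load(&s->idle_sleeptime);
		*iowait = atomic_load(&s->iowait_sleeptime);
	} while ((seq & 1) || atomic_load(&s->seq) != seq);
}

int main(void)
{
	struct stats s = { 0 };
	uint64_t idle, iowait;

	stats_update(&s, 100, 25);
	stats_read(&s, &idle, &iowait);
	printf("idle=%" PRIu64 " iowait=%" PRIu64 "\n", idle, iowait);
	return 0;
}

The retry loop guarantees a reader observes either the pre-update or the
post-update pair, never a half-written mix.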

Note this only partially fixes the reader vs. writer race. A race remains
that involves remote updates of the CPU iowait task counter: that counter
can be decremented by a remote CPU waking up one of this CPU's iowait
tasks, and that can hardly be fixed from here.

Reported-by: Yu Liao <liaoyu15@...wei.com>
Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lore.kernel.org/r/20230222144649.624380-4-frederic@kernel.org
---
kernel/time/tick-sched.c | 22 ++++++++++++++++------
kernel/time/tick-sched.h | 1 +
2 files changed, 17 insertions(+), 6 deletions(-)
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 9058b9e..90d9b7b 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -646,6 +646,7 @@ static void tick_nohz_stop_idle(struct tick_sched *ts, ktime_t now)
 
 	delta = ktime_sub(now, ts->idle_entrytime);
 
+	write_seqcount_begin(&ts->idle_sleeptime_seq);
 	if (nr_iowait_cpu(smp_processor_id()) > 0)
 		ts->iowait_sleeptime = ktime_add(ts->iowait_sleeptime, delta);
 	else
@@ -653,14 +654,18 @@ static void tick_nohz_stop_idle(struct tick_sched *ts, ktime_t now)
 
 	ts->idle_entrytime = now;
 	ts->idle_active = 0;
+	write_seqcount_end(&ts->idle_sleeptime_seq);
 
 	sched_clock_idle_wakeup_event();
 }
 
 static void tick_nohz_start_idle(struct tick_sched *ts)
 {
+	write_seqcount_begin(&ts->idle_sleeptime_seq);
 	ts->idle_entrytime = ktime_get();
 	ts->idle_active = 1;
+	write_seqcount_end(&ts->idle_sleeptime_seq);
+
 	sched_clock_idle_sleep_event();
 }
 
@@ -668,6 +673,7 @@ static u64 get_cpu_sleep_time_us(struct tick_sched *ts, ktime_t *sleeptime,
 				 bool compute_delta, u64 *last_update_time)
 {
 	ktime_t now, idle;
+	unsigned int seq;
 
 	if (!tick_nohz_active)
 		return -1;
@@ -676,13 +682,17 @@ static u64 get_cpu_sleep_time_us(struct tick_sched *ts, ktime_t *sleeptime,
 	if (last_update_time)
 		*last_update_time = ktime_to_us(now);
 
-	if (ts->idle_active && compute_delta) {
-		ktime_t delta = ktime_sub(now, ts->idle_entrytime);
+	do {
+		seq = read_seqcount_begin(&ts->idle_sleeptime_seq);
 
-		idle = ktime_add(*sleeptime, delta);
-	} else {
-		idle = *sleeptime;
-	}
+		if (ts->idle_active && compute_delta) {
+			ktime_t delta = ktime_sub(now, ts->idle_entrytime);
+
+			idle = ktime_add(*sleeptime, delta);
+		} else {
+			idle = *sleeptime;
+		}
+	} while (read_seqcount_retry(&ts->idle_sleeptime_seq, seq));
 
 	return ktime_to_us(idle);
 }
diff --git a/kernel/time/tick-sched.h b/kernel/time/tick-sched.h
index c666325..5ed5a9d 100644
--- a/kernel/time/tick-sched.h
+++ b/kernel/time/tick-sched.h
@@ -75,6 +75,7 @@ struct tick_sched {
 	ktime_t				idle_waketime;
 
 	/* Idle entry */
+	seqcount_t			idle_sleeptime_seq;
 	ktime_t				idle_entrytime;
 
 	/* Tick stop */
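
To see the symptom this addresses, a user-space checker along these lines
(illustrative only, not part of the patch) can hammer the aggregate
idle/iowait fields of /proc/stat and report backward jumps of the kind
this race produced (the remaining remote iowait race aside):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint64_t idle, iowait, prev_idle = 0, prev_iowait = 0;

	for (int i = 0; i < 100000; i++) {
		FILE *f = fopen("/proc/stat", "r");

		if (!f)
			return 1;
		/* First line: cpu  user nice system idle iowait ... (see proc(5)) */
		if (fscanf(f, "cpu %*u %*u %*u %" SCNu64 " %" SCNu64,
			   &idle, &iowait) == 2) {
			if (idle < prev_idle || iowait < prev_iowait)
				printf("backward jump: idle=%" PRIu64
				       " iowait=%" PRIu64 "\n", idle, iowait);
			prev_idle = idle;
			prev_iowait = iowait;
		}
		fclose(f);
	}
	return 0;
}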