Message-ID: <158460766596.28353.12288239928107345497.tip-bot2@tip-bot2>
Date: Thu, 19 Mar 2020 08:47:45 -0000
From: "tip-bot2 for Ahmed S. Darwish" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: "Ahmed S. Darwish" <a.darwish@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>, x86 <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: [tip: timers/core] time/sched_clock: Expire timer in hardirq context
The following commit has been merged into the timers/core branch of tip:
Commit-ID: 2c8bd58812ee3dbf0d78b566822f7eacd34bdd7b
Gitweb: https://git.kernel.org/tip/2c8bd58812ee3dbf0d78b566822f7eacd34bdd7b
Author: Ahmed S. Darwish <a.darwish@...utronix.de>
AuthorDate: Mon, 09 Mar 2020 18:15:29
Committer: Thomas Gleixner <tglx@...utronix.de>
CommitterDate: Thu, 19 Mar 2020 09:45:08 +01:00
time/sched_clock: Expire timer in hardirq context
To minimize latency, PREEMPT_RT kernels expire hrtimers in preemptible
softirq context by default. This can be overridden by marking the timer's
expiry with HRTIMER_MODE_HARD.
sched_clock_timer is missing this annotation: if its callback is preempted
and the duration of the preemption exceeds the wrap-around time of the
underlying clocksource, sched_clock() will get out of sync.
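[ For a sense of the timescale, a hypothetical example (numbers not from
  this patch): a 32-bit free-running counter at 24 MHz wraps in under
  three minutes, so a callback delayed in a preemptible softirq for longer
  than that silently loses a full counter period. ]

/* Hypothetical clocksource, illustration only: wrap time in seconds. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t counter_bits = 32;		/* assumed counter width */
	uint64_t rate_hz = 24000000;		/* assumed counter rate  */
	uint64_t wrap_sec = ((uint64_t)1 << counter_bits) / rate_hz;

	/* Prints ~178 s: the poll timer must run again within this window. */
	printf("wrap-around time: ~%llu s\n", (unsigned long long)wrap_sec);
	return 0;
}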
Mark the sched_clock_timer for expiry in hard interrupt context.
Signed-off-by: Ahmed S. Darwish <a.darwish@...utronix.de>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Link: https://lkml.kernel.org/r/20200309181529.26558-1-a.darwish@linutronix.de
---
kernel/time/sched_clock.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index e4332e3..fa3f800 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -208,7 +208,8 @@ sched_clock_register(u64 (*read)(void), int bits, unsigned long rate)
if (sched_clock_timer.function != NULL) {
/* update timeout for clock wrap */
- hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL);
+ hrtimer_start(&sched_clock_timer, cd.wrap_kt,
+ HRTIMER_MODE_REL_HARD);
}
r = rate;
@@ -254,9 +255,9 @@ void __init generic_sched_clock_init(void)
* Start the timer to keep sched_clock() properly updated and
* sets the initial epoch.
*/
- hrtimer_init(&sched_clock_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ hrtimer_init(&sched_clock_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
sched_clock_timer.function = sched_clock_poll;
- hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL);
+ hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL_HARD);
}
/*
@@ -293,7 +294,7 @@ void sched_clock_resume(void)
struct clock_read_data *rd = &cd.read_data[0];
rd->epoch_cyc = cd.actual_read_sched_clock();
- hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL);
+ hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL_HARD);
rd->read_sched_clock = cd.actual_read_sched_clock;
}
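[ For context, not part of the diff: the callback started above is the
  existing sched_clock_poll(), which in current kernels looks roughly like
  the sketch below. With the *_HARD mode it now runs in hard interrupt
  context, where the short update and re-arm are fine. ]

static enum hrtimer_restart sched_clock_poll(struct hrtimer *hrt)
{
	update_sched_clock();
	hrtimer_forward_now(hrt, cd.wrap_kt);
	return HRTIMER_RESTART;
}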