Message-ID: <tip-27727df240c7cc84f2ba6047c6f18d5addfd25ef@git.kernel.org>
Date:   Wed, 24 Aug 2016 00:40:57 -0700
From:   tip-bot for John Stultz <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     tglx@...utronix.de, rostedt@...dmis.org, mingo@...nel.org,
        peterz@...radead.org, linux-kernel@...r.kernel.org, hpa@...or.com,
        john.stultz@...aro.org, stable@...r.kernel.org
Subject: [tip:timers/urgent] timekeeping: Avoid taking lock in NMI path with
 CONFIG_DEBUG_TIMEKEEPING

Commit-ID:  27727df240c7cc84f2ba6047c6f18d5addfd25ef
Gitweb:     http://git.kernel.org/tip/27727df240c7cc84f2ba6047c6f18d5addfd25ef
Author:     John Stultz <john.stultz@...aro.org>
AuthorDate: Tue, 23 Aug 2016 16:08:21 -0700
Committer:  Thomas Gleixner <tglx@...utronix.de>
CommitDate: Wed, 24 Aug 2016 09:34:31 +0200

timekeeping: Avoid taking lock in NMI path with CONFIG_DEBUG_TIMEKEEPING

When I added some extra sanity checking in timekeeping_get_ns() under
CONFIG_DEBUG_TIMEKEEPING, I missed that the NMI-safe __ktime_get_fast_ns()
method was using timekeeping_get_ns().

Thus the locking added to the debug checks broke the NMI-safety of
__ktime_get_fast_ns().

This patch open-codes the timekeeping_get_ns() logic for
__ktime_get_fast_ns(), so it can avoid any deadlocks in NMI context.
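
For context, the shape of the problem is roughly the following. This is a
simplified sketch, not the in-tree code (the helper name is condensed and
the warning logic is elided), but it shows why a nested seqcount read in
the debug checks cannot be tolerated when the reader is an NMI:

/*
 * Sketch: with CONFIG_DEBUG_TIMEKEEPING the delta computation goes
 * through a debug helper that re-reads the timekeeper state under the
 * main tk_core seqcount, so the sanity checks see a consistent snapshot.
 */
static inline u64 debug_timekeeping_get_delta(struct tk_read_base *tkr)
{
	u64 now, last, mask;
	unsigned int seq;

	do {
		/* Nested read of the update-side seqcount. */
		seq = read_seqcount_begin(&tk_core.seq);
		now  = tkr->read(tkr->clock);
		last = tkr->cycle_last;
		mask = tkr->mask;
	} while (read_seqcount_retry(&tk_core.seq, seq));

	/* ... underflow/overflow warnings elided ... */
	return clocksource_delta(now, last, mask);
}

If an NMI arrives while the timekeeper core holds tk_core.seq for an
update and then calls __ktime_get_fast_ns(), the retry loop above never
sees a stable sequence count and spins forever: the NMI "reader" is
waiting on the very context it interrupted.
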
Fixes: 4ca22c2648f9 ("timekeeping: Add warnings when overflows or underflows are observed")
Reported-by: Steven Rostedt <rostedt@...dmis.org>
Reported-by: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: John Stultz <john.stultz@...aro.org>
Cc: stable <stable@...r.kernel.org>
Link: http://lkml.kernel.org/r/1471993702-29148-2-git-send-email-john.stultz@linaro.org
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
---
 kernel/time/timekeeping.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 3b65746..e07fb09 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -401,7 +401,10 @@ static __always_inline u64 __ktime_get_fast_ns(struct tk_fast *tkf)
 	do {
 		seq = raw_read_seqcount_latch(&tkf->seq);
 		tkr = tkf->base + (seq & 0x01);
-		now = ktime_to_ns(tkr->base) + timekeeping_get_ns(tkr);
+		now = ktime_to_ns(tkr->base);
+
+		now += clocksource_delta(tkr->read(tkr->clock),
+					 tkr->cycle_last, tkr->mask);
 	} while (read_seqcount_retry(&tkf->seq, seq));
 
 	return now;
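
With the delta open-coded, __ktime_get_fast_ns() only touches the latch
sequence in tkf->seq. The latch reader never waits on the writer: the
update side bumps the sequence and refreshes the two tk_read_base copies
alternately, so a reader, even one running in NMI context, always finds a
consistent copy to use. That property is what makes callers such as
ktime_get_mono_fast_ns() usable from NMI. A hypothetical consumer along
these lines relies on exactly that (the hook name is made up for
illustration; only ktime_get_mono_fast_ns() is the real interface):

#include <linux/timekeeping.h>

/*
 * Hypothetical NMI-level hook: the only timestamp source that is safe
 * here is the latched fast path, since a regular seqcount read could
 * end up spinning on the timekeeper it just interrupted.
 */
static void example_nmi_hook(void)
{
	u64 ts = ktime_get_mono_fast_ns();	/* lockless, NMI-safe */

	/* Record ts into a per-cpu buffer, hand it to the tracer, etc. */
	(void)ts;
}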