Date:   Fri, 25 Aug 2017 15:57:04 -0700
From:   John Stultz <john.stultz@...aro.org>
To:     lkml <linux-kernel@...r.kernel.org>
Cc:     John Stultz <john.stultz@...aro.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>,
        Miroslav Lichvar <mlichvar@...hat.com>,
        Richard Cochran <richardcochran@...il.com>,
        Prarit Bhargava <prarit@...hat.com>,
        Stephen Boyd <stephen.boyd@...aro.org>,
        Kevin Brodsky <kevin.brodsky@....com>,
        Will Deacon <will.deacon@....com>,
        Daniel Mentz <danielmentz@...gle.com>,
        Chris Wilson <chris@...is-wilson.co.uk>
Subject: [RFC][PATCH] time: Fix ktime_get_raw() issues caused by incorrect base accumulation

In commit fc6eead7c1e2 ("time: Clean up CLOCK_MONOTONIC_RAW time
handling"), I mistakenly added the following:

 /* Update the monotonic raw base */
 seconds = tk->raw_sec;
 nsec = (u32)(tk->tkr_raw.xtime_nsec >> tk->tkr_raw.shift);
 tk->tkr_raw.base = ns_to_ktime(seconds * NSEC_PER_SEC + nsec);

This adds the raw_sec value and the shifted-down raw xtime_nsec
to the base value.

This is problematic because when ktime_get_raw() is called, we
add tk->tkr_raw.xtime_nsec and the current offset, shift that
sum down, and add it to the raw base.

This results in the shifted-down tk->tkr_raw.xtime_nsec being
added twice.
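
To make the double-count concrete, here is a reduced userspace
model (an illustration only, with made-up shift/mult/xtime_nsec
values; it is not the actual timekeeping code, though the
arithmetic follows the description above):

 #include <stdio.h>
 #include <stdint.h>

 #define NSEC_PER_SEC 1000000000ULL

 /* Loosely modeled on the tkr_raw fields referenced above */
 struct tkr_model {
         uint64_t base_ns;     /* tkr_raw.base, in nanoseconds */
         uint64_t xtime_nsec;  /* shifted nanoseconds */
         uint32_t mult, shift;
 };

 int main(void)
 {
         struct tkr_model tkr = {
                 .xtime_nsec = 500000000ULL << 8, /* 0.5s, pre-shifted */
                 .mult = 1 << 8, .shift = 8,
         };
         uint64_t raw_sec = 100, cycle_delta = 0;

         /* Broken accumulation: base also absorbs shifted xtime_nsec */
         tkr.base_ns = raw_sec * NSEC_PER_SEC +
                       (tkr.xtime_nsec >> tkr.shift);

         /* Read path: shifted xtime_nsec (+ delta) is added again */
         uint64_t nsec = (tkr.xtime_nsec + cycle_delta * tkr.mult)
                                 >> tkr.shift;

         /* Prints 101.000000000 rather than the correct 100.500000000 */
         printf("%llu.%09llu\n",
                (unsigned long long)((tkr.base_ns + nsec) / NSEC_PER_SEC),
                (unsigned long long)((tkr.base_ns + nsec) % NSEC_PER_SEC));
         return 0;
 }

With the base holding only raw_sec * NSEC_PER_SEC, the same read
computes the expected value, since xtime_nsec is then counted
exactly once.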

My mistake was that I was matching the monotonic base logic
above:

 seconds = (u64)(tk->xtime_sec + tk->wall_to_monotonic.tv_sec);
 nsec = (u32) tk->wall_to_monotonic.tv_nsec;
 tk->tkr_mono.base = ns_to_ktime(seconds * NSEC_PER_SEC + nsec);

That code adds the wall_to_monotonic.tv_nsec value, but not the
tk->tkr_mono.xtime_nsec value, to the base.

The result is that ktime_get_raw() users (all of which are
in-kernel users) see the raw time move faster than it should,
by an amount that varies with the current size of
tkr_raw.xtime_nsec; this has resulted in, at the least,
problems with graphics rendering performance.

To fix this, we simplify the tkr_raw.base accumulation to
accumulate only the raw_sec portion, leaving the
tkr_raw.xtime_nsec portion to be added at read time.

Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Miroslav Lichvar <mlichvar@...hat.com>
Cc: Richard Cochran <richardcochran@...il.com>
Cc: Prarit Bhargava <prarit@...hat.com>
Cc: Stephen Boyd <stephen.boyd@...aro.org>
Cc: Kevin Brodsky <kevin.brodsky@....com>
Cc: Will Deacon <will.deacon@....com>
Cc: Daniel Mentz <danielmentz@...gle.com>
Cc: Chris Wilson <chris@...is-wilson.co.uk>
Reported-by: Chris Wilson <chris@...is-wilson.co.uk>
Signed-off-by: John Stultz <john.stultz@...aro.org>
---
 kernel/time/timekeeping.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index cedafa0..7e7e61c 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -637,9 +637,7 @@ static inline void tk_update_ktime_data(struct timekeeper *tk)
 	tk->ktime_sec = seconds;
 
 	/* Update the monotonic raw base */
-	seconds = tk->raw_sec;
-	nsec = (u32)(tk->tkr_raw.xtime_nsec >> tk->tkr_raw.shift);
-	tk->tkr_raw.base = ns_to_ktime(seconds * NSEC_PER_SEC + nsec);
+	tk->tkr_raw.base = ns_to_ktime(tk->raw_sec * NSEC_PER_SEC);
 }
 
 /* must hold timekeeper_lock */
-- 
2.7.4
