Date:   Fri, 26 May 2017 20:33:54 -0700
From:   John Stultz <john.stultz@...aro.org>
To:     lkml <linux-kernel@...r.kernel.org>
Cc:     Will Deacon <will.deacon@....com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>,
        Miroslav Lichvar <mlichvar@...hat.com>,
        Richard Cochran <richardcochran@...il.com>,
        Prarit Bhargava <prarit@...hat.com>,
        Stephen Boyd <stephen.boyd@...aro.org>,
        Kevin Brodsky <kevin.brodsky@....com>,
        Daniel Mentz <danielmentz@...gle.com>,
        John Stultz <john.stultz@...aro.org>
Subject: [RFC][PATCH 3/4] arm64: vdso: Fix nsec handling for CLOCK_MONOTONIC_RAW

From: Will Deacon <will.deacon@....com>

Commit 45a7905fc48f ("arm64: vdso: defer shifting of nanosecond
component of timespec") fixed sub-ns inaccuracies in our vDSO
clock_gettime implementation by deferring the right-shift of the
nanosecond components until after the timespec addition, which
operates on left-shifted values. That worked nicely until
support for CLOCK_MONOTONIC_RAW was added in 49eea433b326
("arm64: Add support for CLOCK_MONOTONIC_RAW in clock_gettime()
vDSO"). Since the core timekeeping code never set
tkr_raw.xtime_nsec, the vDSO implementation didn't bother
exposing it via the data page and instead took the unshifted
tk->raw_time.tv_nsec value, which was then immediately shifted
left in the vDSO code.
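
To illustrate why deferring the shift matters, here is a minimal
userspace sketch (not the kernel code; the struct and names are
made up for illustration) contrasting the two ways of combining
fixed-point nanoseconds:

	#include <stdio.h>
	#include <stdint.h>

	/* Fixed-point nanoseconds: values are (nsec << shift). */
	struct sample {
		uint64_t base_nsec_shifted;	/* base time, already left-shifted */
		uint64_t cycles;		/* cycle delta since cycle_last */
		uint32_t mult;			/* converts cycles to shifted nsec */
		uint32_t shift;
	};

	/* Lossy: right-shifting each term before adding drops sub-ns bits. */
	static uint64_t nsec_early_shift(const struct sample *s)
	{
		return (s->base_nsec_shifted >> s->shift) +
		       ((s->cycles * s->mult) >> s->shift);
	}

	/* Accurate: add the left-shifted values, shift right once at the end. */
	static uint64_t nsec_deferred_shift(const struct sample *s)
	{
		return (s->base_nsec_shifted + s->cycles * s->mult) >> s->shift;
	}

	int main(void)
	{
		/* 0.5ns in the base plus 0.5ns from the delta is a full ns. */
		struct sample s = { .base_nsec_shifted = 1, .cycles = 1,
				    .mult = 1, .shift = 1 };
		printf("early: %llu, deferred: %llu\n",
		       (unsigned long long)nsec_early_shift(&s),
		       (unsigned long long)nsec_deferred_shift(&s));
		return 0;
	}

The early-shift variant prints 0 while the deferred one correctly
prints 1: the sub-ns fractions only survive if the right-shift
happens after the addition.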

Now that the core code is actually setting tkr_raw.xtime_nsec,
we need to take that into account in the vDSO by adding it to
the shifted raw_time value. Rather than do that at each use (and
expand the data page in the process), perform the shift/addition
once when populating the data page and remove the shift from the
vDSO code entirely.
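
In C terms, the handoff becomes the following (a rough sketch of
the scheme; the writer side is taken from the patch below, while
the reader side is really assembly in gettimeofday.S):

	/* Writer (update_vsyscall): fold the shift and xtime_nsec in once. */
	vdso_data->raw_time_nsec = (tk->raw_time.tv_nsec << tk->tkr_raw.shift)
				   + tk->tkr_raw.xtime_nsec;

	/*
	 * Reader (vDSO): raw_time_nsec is now already left-shifted, so the
	 * extra "lsl" goes away and the reader effectively computes:
	 *   nsec = (raw_time_nsec + cycle_delta * cs_raw_mult) >> shift
	 */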

Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Miroslav Lichvar <mlichvar@...hat.com>
Cc: Richard Cochran <richardcochran@...il.com>
Cc: Prarit Bhargava <prarit@...hat.com>
Cc: Stephen Boyd <stephen.boyd@...aro.org>
Cc: Kevin Brodsky <kevin.brodsky@....com>
Cc: Will Deacon <will.deacon@....com>
Cc: Daniel Mentz <danielmentz@...gle.com>
Reported-by: John Stultz <john.stultz@...aro.org>
Acked-by: Kevin Brodsky <kevin.brodsky@....com>
Signed-off-by: Will Deacon <will.deacon@....com>
[jstultz: minor whitespace tweak]
Signed-off-by: John Stultz <john.stultz@...aro.org>
---
 arch/arm64/kernel/vdso.c              | 5 +++--
 arch/arm64/kernel/vdso/gettimeofday.S | 1 -
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 41b6e31..d0cb007 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -221,10 +221,11 @@ void update_vsyscall(struct timekeeper *tk)
 		/* tkr_mono.cycle_last == tkr_raw.cycle_last */
 		vdso_data->cs_cycle_last	= tk->tkr_mono.cycle_last;
 		vdso_data->raw_time_sec		= tk->raw_time.tv_sec;
-		vdso_data->raw_time_nsec	= tk->raw_time.tv_nsec;
+		vdso_data->raw_time_nsec	= (tk->raw_time.tv_nsec <<
+						   tk->tkr_raw.shift) +
+						  tk->tkr_raw.xtime_nsec;
 		vdso_data->xtime_clock_sec	= tk->xtime_sec;
 		vdso_data->xtime_clock_nsec	= tk->tkr_mono.xtime_nsec;
-		/* tkr_raw.xtime_nsec == 0 */
 		vdso_data->cs_mono_mult		= tk->tkr_mono.mult;
 		vdso_data->cs_raw_mult		= tk->tkr_raw.mult;
 		/* tkr_mono.shift == tkr_raw.shift */
diff --git a/arch/arm64/kernel/vdso/gettimeofday.S b/arch/arm64/kernel/vdso/gettimeofday.S
index e00b467..76320e9 100644
--- a/arch/arm64/kernel/vdso/gettimeofday.S
+++ b/arch/arm64/kernel/vdso/gettimeofday.S
@@ -256,7 +256,6 @@ monotonic_raw:
 	seqcnt_check fail=monotonic_raw
 
 	/* All computations are done with left-shifted nsecs. */
-	lsl	x14, x14, x12
 	get_nsec_per_sec res=x9
 	lsl	x9, x9, x12
 
-- 
2.7.4
