Date:	Thu, 26 Mar 2015 12:23:22 -0700
From:	John Stultz <john.stultz@...aro.org>
To:	lkml <linux-kernel@...r.kernel.org>
Cc:	Daniel Thompson <daniel.thompson@...aro.org>,
	Russell King <linux@....linux.org.uk>,
	Will Deacon <will.deacon@....com>,
	Catalin Marinas <catalin.marinas@....com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Stephen Boyd <sboyd@...eaurora.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>,
	John Stultz <john.stultz@...aro.org>
Subject: [PATCH 1/5] sched_clock: Match scope of read and write seqcounts

From: Daniel Thompson <daniel.thompson@...aro.org>

Currently the scope of the raw_write_seqcount_begin/end section in
sched_clock_register() far exceeds the scope of the read section in
sched_clock(). This gives the impression of safety during cursory
review but achieves little.

Note that this is likely to be a latent issue at present, because
sched_clock_register() is typically called before interrupts are
enabled; however, the mismatch risks bugs being needlessly introduced
as the code evolves.

This patch fixes the problem by widening the scope of the read locking
performed by sched_clock() to cover all data modified by
sched_clock_register().

We also improve clarity by moving writes to struct clock_data that do
not affect sched_clock() outside the critical section.
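
For illustration only, and not part of the patch: below is a minimal
userspace sketch of the read-side pattern sched_clock() adopts. C11
atomics stand in for the kernel's raw_read_seqcount_begin() and
read_seqcount_retry() (the real primitives carry stronger data-race
guarantees); all names here are hypothetical.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the kernel's struct clock_data. */
struct clock_data_sketch {
	atomic_uint seq;	/* even: stable, odd: write in progress */
	uint64_t epoch_ns;
	uint64_t epoch_cyc;
	bool suspended;
};

/* Spin until no write is in flight, i.e. the sequence is even. */
static unsigned read_begin(struct clock_data_sketch *cd)
{
	unsigned s;

	while ((s = atomic_load_explicit(&cd->seq,
					 memory_order_acquire)) & 1)
		;
	return s;
}

/* True if a writer ran while we were reading; caller must retry. */
static bool read_retry(struct clock_data_sketch *cd, unsigned s)
{
	atomic_thread_fence(memory_order_acquire);
	return atomic_load_explicit(&cd->seq,
				    memory_order_relaxed) != s;
}

/*
 * The shape of the new sched_clock(): every field the reader
 * consumes, including the suspended flag and the counter sample,
 * sits inside the retry loop.  Scaling by mult/shift is elided.
 */
uint64_t sketch_sched_clock(struct clock_data_sketch *cd,
			    uint64_t (*read_cyc)(void))
{
	uint64_t res;
	unsigned seq;

	do {
		seq = read_begin(cd);
		res = cd->epoch_ns;
		if (!cd->suspended)
			res += read_cyc() - cd->epoch_cyc;
	} while (read_retry(cd, seq));

	return res;
}

The point is the scope: if cd.suspended were tested outside the loop,
a writer could flip it between the test and the reads, which is exactly
the window the patch closes.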

Cc: Russell King <linux@....linux.org.uk>
Cc: Will Deacon <will.deacon@....com>
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Daniel Thompson <daniel.thompson@...aro.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Stephen Boyd <sboyd@...eaurora.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>
Reviewed-by: Stephen Boyd <sboyd@...eaurora.org>
Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Daniel Thompson <daniel.thompson@...aro.org>
[jstultz: Slight rework to apply to tip/timers/core]
Signed-off-by: John Stultz <john.stultz@...aro.org>
---
 kernel/time/sched_clock.c | 26 +++++++++++---------------
 1 file changed, 11 insertions(+), 15 deletions(-)

diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index ca3bc5c..1751e95 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -58,23 +58,21 @@ static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
 
 unsigned long long notrace sched_clock(void)
 {
-	u64 epoch_ns;
-	u64 epoch_cyc;
-	u64 cyc;
+	u64 cyc, res;
 	unsigned long seq;
 
-	if (cd.suspended)
-		return cd.epoch_ns;
-
 	do {
 		seq = raw_read_seqcount_begin(&cd.seq);
-		epoch_cyc = cd.epoch_cyc;
-		epoch_ns = cd.epoch_ns;
+
+		res = cd.epoch_ns;
+		if (!cd.suspended) {
+			cyc = read_sched_clock();
+			cyc = (cyc - cd.epoch_cyc) & sched_clock_mask;
+			res += cyc_to_ns(cyc, cd.mult, cd.shift);
+		}
 	} while (read_seqcount_retry(&cd.seq, seq));
 
-	cyc = read_sched_clock();
-	cyc = (cyc - epoch_cyc) & sched_clock_mask;
-	return epoch_ns + cyc_to_ns(cyc, cd.mult, cd.shift);
+	return res;
 }
 
 /*
@@ -111,7 +109,6 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 {
 	u64 res, wrap, new_mask, new_epoch, cyc, ns;
 	u32 new_mult, new_shift;
-	ktime_t new_wrap_kt;
 	unsigned long r;
 	char r_unit;
 
@@ -124,10 +121,11 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 	clocks_calc_mult_shift(&new_mult, &new_shift, rate, NSEC_PER_SEC, 3600);
 
 	new_mask = CLOCKSOURCE_MASK(bits);
+	cd.rate = rate;
 
 	/* calculate how many nanosecs until we risk wrapping */
 	wrap = clocks_calc_max_nsecs(new_mult, new_shift, 0, new_mask, NULL);
-	new_wrap_kt = ns_to_ktime(wrap);
+	cd.wrap_kt = ns_to_ktime(wrap);
 
 	/* update epoch for new counter and update epoch_ns from old counter*/
 	new_epoch = read();
@@ -138,8 +136,6 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 	raw_write_seqcount_begin(&cd.seq);
 	read_sched_clock = read;
 	sched_clock_mask = new_mask;
-	cd.rate = rate;
-	cd.wrap_kt = new_wrap_kt;
 	cd.mult = new_mult;
 	cd.shift = new_shift;
 	cd.epoch_cyc = new_epoch;
-- 
1.9.1
