Message-ID: <tip-9407f5a7ee77c631d1e100436132437cf6237e45@git.kernel.org>
Date:   Fri, 20 Jul 2018 03:00:50 -0700
From:   tip-bot for Peter Zijlstra <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     mingo@...nel.org, tglx@...utronix.de, peterz@...radead.org,
        linux-kernel@...r.kernel.org, pasha.tatashin@...cle.com,
        hpa@...or.com
Subject: [tip:x86/timers] sched/clock: Close a hole in sched_clock_init()

Commit-ID:  9407f5a7ee77c631d1e100436132437cf6237e45
Gitweb:     https://git.kernel.org/tip/9407f5a7ee77c631d1e100436132437cf6237e45
Author:     Peter Zijlstra <peterz@...radead.org>
AuthorDate: Fri, 20 Jul 2018 10:09:11 +0200
Committer:  Thomas Gleixner <tglx@...utronix.de>
CommitDate: Fri, 20 Jul 2018 11:58:00 +0200

sched/clock: Close a hole in sched_clock_init()

All data required for the 'unstable' sched_clock must be set up _before_
enabling it -- that is, before setting sched_clock_running. This includes
the __gtod_offset but also a recent scd stamp.
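
For context, once sched_clock_running is set, the unstable path computes
the clock roughly like this (a paraphrased sketch of sched_clock_local(),
not part of this patch):

	/* sketch: how the data set up above is consumed */
	delta = sched_clock() - scd->tick_raw;
	clock = scd->tick_gtod + __gtod_offset + delta;

so both a recent scd stamp and a matching __gtod_offset have to exist
before the first caller can reach this code.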

Make the gtod-offset update also set the scd stamp -- it requires the same
two clock reads _anyway_. This doesn't hurt in the sched_clock_tick_stable()
case and ensures sched_clock_init() gets everything set up before use.
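
For reference, __scd_stamp() performs those same two reads; it reads
roughly as follows in this file (quoted for context only, not changed by
this patch):

	static void __scd_stamp(struct sched_clock_data *scd)
	{
		scd->tick_gtod = ktime_get_ns();
		scd->tick_raw  = sched_clock();
	}

so folding it into __sched_clock_gtod_offset() adds no extra cost.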

Also switch to unconditional IRQ-disable/enable, because the static key
code already requires that this is not run with IRQs disabled.
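
To illustrate the difference (a simplified sketch, not the code being
changed below):

	local_irq_save(flags);		/* safe even if the caller already disabled IRQs */
	/* ... critical section ... */
	local_irq_restore(flags);	/* restores the caller's previous IRQ state */

	local_irq_disable();		/* assumes IRQs are enabled on entry */
	/* ... critical section ... */
	local_irq_enable();		/* unconditionally re-enables IRQs */

Because static_branch_inc() can block, sched_clock_init() is only ever
called with IRQs enabled, so the cheaper unconditional form suffices.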

Fixes: 857baa87b642 ("sched/clock: Enable sched clock early")
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: Pavel Tatashin <pasha.tatashin@...cle.com>
Cc: steven.sistare@...cle.com
Cc: daniel.m.jordan@...cle.com
Cc: linux@...linux.org.uk
Cc: schwidefsky@...ibm.com
Cc: heiko.carstens@...ibm.com
Cc: john.stultz@...aro.org
Cc: sboyd@...eaurora.org
Cc: hpa@...or.com
Cc: douly.fnst@...fujitsu.com
Cc: prarit@...hat.com
Cc: feng.tang@...el.com
Cc: pmladek@...e.com
Cc: gnomes@...rguk.ukuu.org.uk
Cc: linux-s390@...r.kernel.org
Cc: boris.ostrovsky@...cle.com
Cc: jgross@...e.com
Cc: pbonzini@...hat.com
Link: https://lkml.kernel.org/r/20180720080911.GM2494@hirez.programming.kicks-ass.net
---
 kernel/sched/clock.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index c5c47ad3f386..811a39aca1ce 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -197,13 +197,14 @@ void clear_sched_clock_stable(void)
 
 static void __sched_clock_gtod_offset(void)
 {
-	__gtod_offset = (sched_clock() + __sched_clock_offset) - ktime_get_ns();
+	struct sched_clock_data *scd = this_scd();
+
+	__scd_stamp(scd);
+	__gtod_offset = (scd->tick_raw + __sched_clock_offset) - scd->tick_gtod;
 }
 
 void __init sched_clock_init(void)
 {
-	unsigned long flags;
-
 	/*
 	 * Set __gtod_offset such that once we mark sched_clock_running,
 	 * sched_clock_tick() continues where sched_clock() left off.
@@ -211,16 +212,11 @@ void __init sched_clock_init(void)
 	 * Even if TSC is buggered, we're still UP at this point so it
 	 * can't really be out of sync.
 	 */
-	local_irq_save(flags);
+	local_irq_disable();
 	__sched_clock_gtod_offset();
-	local_irq_restore(flags);
+	local_irq_enable();
 
 	static_branch_inc(&sched_clock_running);
-
-	/* Now that sched_clock_running is set adjust scd */
-	local_irq_save(flags);
-	sched_clock_tick();
-	local_irq_restore(flags);
 }
 /*
  * We run this as late_initcall() such that it runs after all built-in drivers,
