Date:   Wed, 8 Mar 2017 13:53:06 -0800
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     linux-kernel@...r.kernel.org
Cc:     mingo@...hat.com, peterz@...radead.org, fweisbec@...il.com
Subject: [PATCH] clock: Fix smp_processor_id() in preemptible bug

The v4.11-rc1 kernel emits the following splat in some configurations:

[   43.681891] BUG: using smp_processor_id() in preemptible [00000000] code: kworker/3:1/49
[   43.682511] caller is debug_smp_processor_id+0x17/0x20
[   43.682893] CPU: 0 PID: 49 Comm: kworker/3:1 Not tainted 4.11.0-rc1+ #1
[   43.683382] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
[   43.683497] Workqueue: events __clear_sched_clock_stable
[   43.683497] Call Trace:
[   43.683497]  dump_stack+0x4f/0x69
[   43.683497]  check_preemption_disabled+0xd9/0xf0
[   43.683497]  debug_smp_processor_id+0x17/0x20
[   43.683497]  __clear_sched_clock_stable+0x11/0x60
[   43.683497]  process_one_work+0x146/0x430
[   43.683497]  worker_thread+0x126/0x490
[   43.683497]  kthread+0xfc/0x130
[   43.683497]  ? process_one_work+0x430/0x430
[   43.683497]  ? kthread_create_on_node+0x40/0x40
[   43.683497]  ? umh_complete+0x30/0x30
[   43.683497]  ? call_usermodehelper_exec_async+0x12a/0x130
[   43.683497]  ret_from_fork+0x29/0x40
[   43.689244] sched_clock: Marking unstable (43688244724, 179505618)<-(43867750342, 0)

This happens because workqueue handlers run with preemption enabled
by default, while the new this_scd() function accesses per-CPU variables,
which is legal only with preemption disabled.  This commit therefore
disables preemption across the call to this_scd() and the use of the
returned pointer to compute gtod_offset.  Lightly tested successfully
on x86.
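
For reference, this is the standard idiom for touching per-CPU data from
preemptible context: disable preemption so the task cannot migrate to
another CPU between looking up the per-CPU pointer and dereferencing it.
A minimal sketch of the pattern follows; the struct and function names
are made up for illustration and are not from kernel/sched/clock.c:

	/* requires <linux/percpu.h> and <linux/preempt.h> */
	struct example_data {
		u64 count;
	};
	static DEFINE_PER_CPU(struct example_data, example_data);

	static void example_update(void)
	{
		struct example_data *d;

		preempt_disable();			/* pin the task to this CPU */
		d = this_cpu_ptr(&example_data);	/* safe: no migration possible */
		d->count++;				/* ... so the pointer stays valid */
		preempt_enable();			/* d must not be used past here */
	}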

Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Frederic Weisbecker <fweisbec@...il.com>

diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index a08795e21628..aa184bea1344 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -143,7 +143,7 @@ static void __set_sched_clock_stable(void)
 
 static void __clear_sched_clock_stable(struct work_struct *work)
 {
-	struct sched_clock_data *scd = this_scd();
+	struct sched_clock_data *scd;
 
 	/*
 	 * Attempt to make the stable->unstable transition continuous.
@@ -154,7 +154,10 @@ static void __clear_sched_clock_stable(struct work_struct *work)
 	 *
 	 * Still do what we can.
 	 */
+	preempt_disable();
+	scd = this_scd();
 	gtod_offset = (scd->tick_raw + raw_offset) - (scd->tick_gtod);
+	preempt_enable();
 
 	printk(KERN_INFO "sched_clock: Marking unstable (%lld, %lld)<-(%lld, %lld)\n",
 			scd->tick_gtod, gtod_offset,
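
For what it is worth, the same critical section could also be written
with the get_cpu_ptr()/put_cpu_ptr() helpers, which fold the
preempt_disable()/preempt_enable() pair into the per-CPU lookup.
Untested sketch, assuming the underlying per-CPU variable is the
sched_clock_data that this_scd() wraps:

	scd = get_cpu_ptr(&sched_clock_data);	/* implies preempt_disable() */
	gtod_offset = (scd->tick_raw + raw_offset) - (scd->tick_gtod);
	put_cpu_ptr(&sched_clock_data);		/* implies preempt_enable() */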
