Date:	Fri, 30 Jan 2015 14:16:10 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	linux-kernel@...r.kernel.org
Cc:	mingo@...nel.org, laijs@...fujitsu.com, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
	josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
	rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
	dvhart@...ux.intel.com, fweisbec@...il.com, oleg@...hat.com,
	bobby.prani@...il.com, tianyu.lan@...el.com, bp@...e.de,
	toshi.kani@...com, imammedo@...hat.com
Subject: Re: [PATCH RFC tip/core/rcu 4/4] rcu: Handle outgoing CPUs on exit
 from idle loop

On Thu, Jan 29, 2015 at 04:20:04PM -0800, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
> 
> This commit informs RCU of an outgoing CPU just before that CPU invokes
> arch_cpu_idle_dead() during its last pass through the idle loop (via a
> new CPU_DYING_IDLE notifier value).  This change means that RCU need not
> deal with outgoing CPUs passing through the scheduler after informing
> RCU that they are no longer online.  Note that removing the CPU from
> the rcu_node ->qsmaskinit bit masks is done at CPU_DYING_IDLE time,
> and orphaning callbacks is still done at CPU_DEAD time, the reason being
> that at CPU_DEAD time we have another CPU that can adopt them.

And this exposed the fact that arch_cpu_idle_dead(), which is executed
on the offlined CPU, has RCU read-side critical sections.  Sometimes.
The following patch fixes this, though I would welcome improved ways
of handling this that don't involve RCU read-side critical sections
on offlined CPUs.
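
For reference, here is a rough sketch of where those read-side critical
sections come from.  It is illustrative only: the helper name below is
made up, and the real code is spread across arch_cpu_idle_dead() ->
play_dead() -> native_play_dead() -> play_dead_common() in
arch/x86/kernel/smpboot.c.

	/* Illustrative collapse of the x86 offline path -- not actual kernel code. */
	static void offline_path_sketch(void)
	{
		idle_task_exit();

		mb();
		/* Ack the offline request. */
		__this_cpu_write(cpu_state, CPU_DEAD);

		/*
		 * Added by commit 2ed53c0d6cc99: if the CPU executing
		 * native_cpu_die() is already asleep in
		 * wait_for_completion_timeout(), this wakes it, and the
		 * scheduler's wakeup path enters RCU read-side critical
		 * sections -- on a CPU that RCU no longer considers online.
		 */
		complete(&per_cpu(die_complete, smp_processor_id()));

		native_halt();	/* this CPU finally goes away */
	}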

							Thanx, Paul

------------------------------------------------------------------------

cpu: Stop newly offlined CPU from using RCU readers

RCU ignores offlined CPUs, so they cannot safely run RCU read-side code.
(They -can- use SRCU, but not RCU.)  This means that any use of RCU
during or after the call to arch_cpu_idle_dead() is illegal.  Unfortunately,
commit 2ed53c0d6cc99 added a complete() call, which will contain RCU
read-side critical sections if there is a task waiting to be awakened.
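
To make the "task waiting to be awakened" condition concrete, here is a
much-simplified sketch of what complete() does (not a verbatim copy of
kernel/sched/completion.c):

	void complete_sketch(struct completion *x)
	{
		unsigned long flags;

		spin_lock_irqsave(&x->wait.lock, flags);
		x->done++;
		/*
		 * If a waiter is already sleeping on x->wait, this wakes it
		 * via the scheduler (try_to_wake_up() and friends), and that
		 * wakeup path uses rcu_read_lock()/rcu_read_unlock().  If no
		 * one is asleep yet, nothing needs waking and no RCU reader
		 * runs on this CPU.
		 */
		__wake_up_locked(&x->wait, TASK_NORMAL, 1);
		spin_unlock_irqrestore(&x->wait.lock, flags);
	}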

Such a waiting task, as it turns out, almost never exists.  In my qemu/KVM testing,
the to-be-awakened task is not yet asleep more than 99.5% of the time.
In current mainline, failure is even harder to reproduce, requiring a
virtualized environment that delays the outgoing CPU by at least three
jiffies between the time it exits its stop_machine() task at CPU_DYING
time and the time it calls arch_cpu_idle_dead() from the idle loop.

This suggests moving back to the polling loop, but using a one-jiffy wait
instead of the old 100-millisecond wait.  Most of the time, the loop
will exit without waiting at all, and almost all of the remaining uses
will wait only one jiffy.  Of course, if this proves to be a problem,
it would be easy to make the first few passes through the loop wait only
(say) ten microseconds.
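
Should that ever become necessary, a variant along the following lines
(purely hypothetical, not part of this patch) would keep the common case
fast while still falling back to one-jiffy sleeps:

	void cpu_die_common(unsigned int cpu)
	{
		int i = 0;

		/* Fast path: the outgoing CPU is usually already at, or within
		 * microseconds of, CPU_DEAD. */
		while (per_cpu(cpu_state, cpu) != CPU_DEAD && ++i <= 100)
			udelay(10);

		/* Slow path: poll once per jiffy, for roughly one second total. */
		i = 0;
		while (per_cpu(cpu_state, cpu) != CPU_DEAD && ++i <= HZ)
			schedule_timeout_uninterruptible(1);
	}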

This commit therefore reverts to a polling loop, but with a one-jiffy
wait instead of the old 100-millisecond wait.

Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Lan Tianyu <tianyu.lan@...el.com>
Cc: Prarit Bhargava <prarit@...hat.com>
Cc: Borislav Petkov <bp@...e.de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Toshi Kani <toshi.kani@...com>
Cc: Igor Mammedov <imammedo@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 6d7022c683e3..cda3f4158d1a 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1297,14 +1297,10 @@ static void __ref remove_cpu_from_maps(int cpu)
 	numa_remove_cpu(cpu);
 }
 
-static DEFINE_PER_CPU(struct completion, die_complete);
-
 void cpu_disable_common(void)
 {
 	int cpu = smp_processor_id();
 
-	init_completion(&per_cpu(die_complete, smp_processor_id()));
-
 	remove_siblinginfo(cpu);
 
 	/* It's now safe to remove this processor from the online map */
@@ -1330,7 +1326,10 @@ int native_cpu_disable(void)
 
 void cpu_die_common(unsigned int cpu)
 {
-	wait_for_completion_timeout(&per_cpu(die_complete, cpu), HZ);
+	int i = 0;
+
+	while (per_cpu(cpu_state, cpu) != CPU_DEAD && ++i <= HZ)
+		schedule_timeout_uninterruptible(1);
 }
 
 void native_cpu_die(unsigned int cpu)
@@ -1357,7 +1356,6 @@ void play_dead_common(void)
 	mb();
 	/* Ack it */
 	__this_cpu_write(cpu_state, CPU_DEAD);
-	complete(&per_cpu(die_complete, smp_processor_id()));
 
 	/*
 	 * With physical CPU hotplug, we should halt the cpu
