Date:	Fri, 19 Dec 2014 11:51:25 -0800
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Dave Jones <davej@...hat.com>, Chris Mason <clm@...com>,
	Mike Galbraith <umgwanakikbuti@...il.com>,
	Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Dâniel Fraga <fragabr@...il.com>,
	Sasha Levin <sasha.levin@...cle.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Suresh Siddha <sbsiddha@...il.com>,
	Oleg Nesterov <oleg@...hat.com>,
	Peter Anvin <hpa@...ux.intel.com>
Subject: Re: frequent lockups in 3.18rc4

On Fri, Dec 19, 2014 at 11:15 AM, Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
>
> In your earlier trace (with spinlock debugging), the softlockup
> detection was in lock_acquire for copy_page_range(), but CPU2 was
> always in that "generic_exec_single" due to a TLB flush from that
> zap_page_range thing again. But there are no timer traces from that
> one, so I dunno.

Ahh, and that's because the TLB flushing is done under the page table
lock these days (see commit 1cf35d47712d: "mm: split 'tlb_flush_mmu()'
into tlb flushing and memory freeing parts").
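
For reference, the shape after that split is roughly this (paraphrased,
not the actual mm/memory.c code, with the helpers only declared here);
the point being that the zap path can now do the flushing half while it
still holds the page table lock, and free the pages only afterwards:

/* Paraphrased shape of the 1cf35d47712d split, not verbatim kernel code;
 * the struct and helpers are only declared so the snippet stands alone. */
struct mmu_gather;
void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb); /* remote flush: IPIs on x86 */
void tlb_flush_mmu_free(struct mmu_gather *tlb);    /* free the gathered pages   */

void tlb_flush_mmu(struct mmu_gather *tlb)
{
        /* the flushing half can be run on its own, under the ptl... */
        tlb_flush_mmu_tlbonly(tlb);
        /* ...and the memory-freeing half is done separately */
        tlb_flush_mmu_free(tlb);
}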

Which means that if the TLB flushing gets stuck on CPU#2, then CPU#1,
which is trying to get the page table lock, will be locked up too.

So this is all very consistent, actually. The underlying bug in both
cases seems to be that the IPI for the TLB flushing doesn't happen for
some reason.
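
To spell the dependency chain out, here's a purely userspace model of it
(none of this is kernel code: the cpu0/cpu1/cpu2 thread names and the
flush_ipi_pending/ipi_lost flags are made up for illustration). One
thread holds the "page table lock" and spin-waits, with no timeout, for
its "flush IPI" to be acked; a second thread just wants that same lock;
a third is the IPI target:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t page_table_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int cpu2_has_lock;        /* ordering helper for the demo         */
static atomic_int flush_ipi_pending;    /* 1 while the "flush IPI" is in flight */
static atomic_int ipi_lost = 1;         /* set to 0 and everything completes    */

/* stands in for the zap_page_range() side: flush the TLB under the lock */
static void *cpu2_zap(void *arg)
{
        pthread_mutex_lock(&page_table_lock);
        atomic_store(&cpu2_has_lock, 1);
        atomic_store(&flush_ipi_pending, 1);       /* "send" the flush IPI    */
        while (atomic_load(&flush_ipi_pending))    /* wait for the ack ...    */
                ;                                  /* ... with no timeout     */
        pthread_mutex_unlock(&page_table_lock);
        return NULL;
}

/* stands in for the copy_page_range() side: just wants the same lock */
static void *cpu1_copy(void *arg)
{
        while (!atomic_load(&cpu2_has_lock))
                ;
        pthread_mutex_lock(&page_table_lock);      /* stuck behind cpu2       */
        puts("cpu1 got the page table lock");
        pthread_mutex_unlock(&page_table_lock);
        return NULL;
}

/* stands in for the IPI target: acks the flush, unless the IPI got lost */
static void *cpu0_target(void *arg)
{
        if (atomic_load(&ipi_lost))
                return NULL;                       /* nobody clears the flag  */
        while (!atomic_load(&flush_ipi_pending))
                ;
        atomic_store(&flush_ipi_pending, 0);
        return NULL;
}

int main(void)
{
        pthread_t t[3];
        pthread_create(&t[0], NULL, cpu2_zap, NULL);
        pthread_create(&t[1], NULL, cpu1_copy, NULL);
        pthread_create(&t[2], NULL, cpu0_target, NULL);
        for (int i = 0; i < 3; i++)                /* hangs while ipi_lost = 1 */
                pthread_join(t[i], NULL);
        return 0;
}

Flip ipi_lost to 0 and everything completes. Leave it set and "cpu2"
spins forever with the lock held while "cpu1" blocks behind it, which is
the same two-CPU picture as in the traces: the locking itself is fine,
one lost ack is enough.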

In your second trace, that's explained by the fact that CPU0 is in a
timer interrupt. In the first trace with spinlock debugging, no such
obvious explanation exists. It could be that an IPI has gotten lost
for some reason.

However, the first trace does have this:

   NMI backtrace for cpu 3
   INFO: NMI handler (arch_trigger_all_cpu_backtrace_handler) took too long to run: 66.180 msecs
   CPU: 3 PID: 0 Comm: swapper/3 Not tainted 3.18.0+ #107
   RIP: 0010:   intel_idle+0xdb/0x180
   Code: 31 d2 65 48 8b 34 ...
   INFO: NMI handler (arch_trigger_all_cpu_backtrace_handler) took too long to run: 95.053 msecs

so something odd is happening (probably on CPU3). It took a long time
to react to the NMI IPI too.

So there's definitely something screwy going on here in IPI-land.

I do note that we depend on the "new mwait" semantics where we do
mwait with interrupts disabled and a non-zero RCX value. Are there
possibly even any known CPU errata in that area? Not that it sounds
likely, but still..
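
Concretely, the pattern I mean is something like this (just a sketch of
the instruction usage, not intel_idle's actual code, and MONITOR/MWAIT
fault outside ring 0, so this is kernel-context only):

#include <stdint.h>

/* Sketch of the "new mwait" idle pattern: enter mwait with interrupts
 * masked and rely on ECX bit 0 ("treat masked interrupts as break
 * events") to be woken by the interrupt anyway. */
static inline void mwait_idle_sketch(const void *monitor_addr, uint32_t cstate_hint)
{
        /* arm the monitor on e.g. the thread flags word */
        asm volatile("monitor"
                     : : "a" (monitor_addr), "c" (0UL), "d" (0UL));

        /* eax = C-state hint, ecx bit 0 = interrupt break event.  Without
         * that bit a masked interrupt would not by itself break out of
         * mwait; only a write to the monitored address, an NMI, etc. would. */
        asm volatile("mwait"
                     : : "a" (cstate_hint), "c" (1UL));
}

The wakeup interrupt is then actually taken once the idle path
re-enables interrupts after mwait returns.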

                         Linus
