Message-ID: <20150416163140.GA17024@gmail.com>
Date:	Thu, 16 Apr 2015 18:31:40 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Chris J Arges <chris.j.arges@...onical.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Rafael David Tinoco <inaddy@...ntu.com>,
	Peter Anvin <hpa@...or.com>,
	Jiang Liu <jiang.liu@...ux.intel.com>,
	Peter Zijlstra <peterz@...radead.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Jens Axboe <axboe@...nel.dk>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Gema Gomez <gema.gomez-solano@...onical.com>,
	the arch/x86 maintainers <x86@...nel.org>
Subject: Re: [PATCH] smp/call: Detect stuck CSD locks


* Chris J Arges <chris.j.arges@...onical.com> wrote:

> A previous backtrace from a 3.19 series kernel is here, showing interrupts
> enabled on both CPUs on L1:
> https://lkml.org/lkml/2015/2/23/234
> http://people.canonical.com/~inaddy/lp1413540/BACKTRACES.txt
>
> [...]
>
> Yes, I think at this point I'll go through the various backtraces 
> and try to narrow things down. I think overall we're seeing a single 
> effect from multiple code paths.

Now what would be nice is to observe whether the CPU that is not
doing the CSD wait is truly locked up.

It might be executing random KVM-ish workloads and the various 
backtraces we've seen so far are just a random sample of those 
workloads (from L1's perspective).

Yet the fact that kdump's NMI gets through is a strong indication
that the CPU's APIC is fine: NMIs are essentially IPIs too; they just
go to the NMI vector, which punches through irqs-off regions.
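
(For context: on x86 the only real difference between a regular IPI
and an NMI IPI is the vector it is aimed at - roughly what the
backtrace code in arch/x86/kernel/apic/hw_nmi.c does. Sketch only;
the exact function name varies between kernel versions:

	static void nmi_raise_cpu_backtrace(cpumask_t *mask)
	{
		/* same IPI delivery path, just aimed at the NMI vector */
		apic->send_IPI_mask(mask, NMI_VECTOR);
	}
)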

So maybe another debug trick would be useful: instead of re-sending 
the IPI, send a single non-destructive NMI every second or so, 
creating a backtrace on the other CPU. From that we'll be able to see 
whether it's locked up permanently in an irqs-off section.

I.e. basically you could try to trigger the 'show NMI backtraces on 
all CPUs' logic when the lockup triggers, and repeat it every couple 
of seconds.

The simplest method to do that would be to call:

	trigger_all_cpu_backtrace();

every couple of seconds, in the CSD polling loop - after the initial 
timeout has passed. I'd suggest collecting at least 10 pairs of
backtraces that way.
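
Roughly this kind of thing in kernel/smp.c's csd_lock_wait() - a
sketch only, the timeout values and the exact shape of the polling
loop are illustrative, not tested:

	static void csd_lock_wait(struct call_single_data *csd)
	{
		/* illustrative: give the IPI a few seconds at first */
		unsigned long timeout = jiffies + 5*HZ;

		while (csd->flags & CSD_FLAG_LOCK) {
			if (time_after(jiffies, timeout)) {
				/* NMI backtrace on all CPUs ... */
				trigger_all_cpu_backtrace();
				/* ... repeated every couple of seconds */
				timeout = jiffies + 2*HZ;
			}
			cpu_relax();
		}
	}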

Thanks,

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
