Message-ID: <20150413035616.GA24037@canonical.com>
Date:	Sun, 12 Apr 2015 22:56:17 -0500
From:	Chris J Arges <chris.j.arges@...onical.com>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Rafael David Tinoco <inaddy@...ntu.com>,
	Peter Anvin <hpa@...or.com>,
	Jiang Liu <jiang.liu@...ux.intel.com>,
	Peter Zijlstra <peterz@...radead.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Jens Axboe <axboe@...nel.dk>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Gema Gomez <gema.gomez-solano@...onical.com>,
	the arch/x86 maintainers <x86@...nel.org>
Subject: Re: [PATCH] smp/call: Detect stuck CSD locks

<snip> 
> So it would be really important to see the stack dump of CPU#0 at this 
> point, and also do an APIC state dump of it.
> 
> Because from previous dumps it appeared that the 'stuck' CPU was just 
> in its idle loop - which is 'impossible', as the idle loop should 
> still allow APIC irqs to arrive.
> 
> This behavior can only happen if:
> 
> 	- CPU#0 has irqs disabled perpetually. A dump of CPU#0 should
> 	  tell us where it's executing. This actually has a fair 
> 	  chance of being the case, as it has happened in a number of 
> 	  bugs in the past. I thought the dumps you guys provided 
> 	  earlier had excluded this possibility, but it merits 
> 	  re-examination with the debug patches applied.
> 
> 	- the APIC on CPU#0 is unacked and has queued up so many IPIs 
> 	  that it starts rejecting them. I'm not sure that's even 
> 	  possible on KVM though, unless part of the hardware 
> 	  virtualizes the APIC. One other thing that argues against 
> 	  this scenario is that NMIs appear to be reaching through to 
> 	  CPU#0: the crash dumps and dump-on-all-cpus NMI callbacks 
> 	  worked fine.
> 
> 	- the APIC on CPU#0 is in some weird state well outside of its 
> 	  Linux programming model (TPR set wrong, etc. etc.). There's 
> 	  literally a myriad of ways an APIC can be configured to not 
> 	  receive IPIs: but I've never actually seen this happen under 
> 	  Linux, as it needs complicated writes to specialized APIC 
> 	  registers, and we don't actually reconfigure the APIC in any 
> 	  serious fashion aside from bootup. Low likelihood, but not 
> 	  impossible. Again, NMIs reaching through make this situation 
> 	  less likely.
> 
> 	- CPU#0 having a bad IDT and essentially ignoring certain 
> 	  IPIs. This presumes some serious but very targeted memory 
> 	  corruption. Lowest likelihood.
> 
> 	- ... other failure modes that elude me. None of the 
> 	  scenarios above strikes me as particularly plausible - but 
> 	  something must be causing the lockup, so ...
> 
> In any case, something got seriously messed up on CPU#0 and stays 
> messed up during the lockup, and it would help a lot to figure out 
> exactly what, by further examining its state.
> 
> Note, it might also be useful to dump KVM's state of the APIC of 
> CPU#0, to see why _it_ isn't sending (and injecting) the lapic IRQ 
> into CPU#0, as by all indications it should. [Maybe take a look at 
> CPU#1 as well, to make sure the IPI was actually generated.]
> 
> It should be much easier to figure this out on the KVM side than on 
> the native hardware side, since KVM emulates the lapic to a large 
> degree and we can see the 'hardware state' directly. If we are lucky, 
> the KVM problem mirrors the native hardware problem.
> 
> Btw., it might also be helpful to try turning off hardware-assisted 
> APIC virtualization on the KVM side, to make the APIC purely 
> software-emulated. If that magically makes the bug go away, it raises 
> the likelihood that the bug is really hardware APIC related.
>
> I don't know what the magic incantation is to make 'pure software 
> APIC' happen on KVM and Qemu though.
> 
> Thanks,
> 
> 	Ingo
>

Ingo,

On the affected hardware, /sys/module/kvm_intel/parameters/enable_apicv shows
that APIC virtualization is not enabled, and unfortunately my hardware doesn't
have the necessary features to enable it. So we are dealing with KVM's lapic
implementation only.

FYI, I'm working on getting better data at the moment; here is my approach:
* For the L0 kernel:
 - In arch/x86/kvm/lapic.c, I enabled 'apic_debug' to get more output (and
   print the addresses of various useful structures); see the first sketch
   below.
 - Set up crash to live-dump the kvm_lapic structures and associated
   registers for both vCPUs.
* For the L1 kernel:
 - Dump a stacktrace when we detect a lockup.
 - Detect the lockup while trying not to alter the state.
 - Provide a reliable signal so that the L0 hypervisor can dump the lapic
   structures and registers when csd_lock_wait detects a softlockup; see the
   second sketch below.
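
For reference, here is a minimal sketch of what enabling 'apic_debug' amounts
to in arch/x86/kvm/lapic.c, plus a hypothetical extra print of the structure
addresses; the exact surrounding code and the kvm_create_lapic() hunk are
illustrative only, not the patch I'm actually running:

/* arch/x86/kvm/lapic.c (approximate): apic_debug() is compiled out by
 * default; redefining it as a printk turns on the verbose lapic tracing. */
#if 0
#define apic_debug(fmt, arg...)			/* default: no-op */
#else
#define apic_debug(fmt, arg...) printk(KERN_WARNING fmt, ##arg)
#endif

/* Hypothetical addition in kvm_create_lapic(), so the struct kvm_lapic and
 * its register page can be located later from the crash utility: */
apic_debug("vcpu %d: kvm_lapic at %p, regs at %p\n",
	   vcpu->vcpu_id, apic, apic->regs);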
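
And to make the L1 side concrete, a minimal sketch of the kind of
timeout-based detection in csd_lock_wait() (kernel/smp.c) that the L1 items
describe; the timeout constant and the messages are made up for illustration,
and this is not the actual patch being discussed in this thread:

/* kernel/smp.c: illustrative timeout-based CSD-lock detection.
 * CSD_LOCK_TIMEOUT_NS is a made-up constant. */
#define CSD_LOCK_TIMEOUT_NS	(10ULL * NSEC_PER_SEC)

static void csd_lock_wait(struct call_single_data *csd)
{
	u64 start = sched_clock();

	while (smp_load_acquire(&csd->flags) & CSD_FLAG_LOCK) {
		if (sched_clock() - start > CSD_LOCK_TIMEOUT_NS) {
			/* Stacktrace of the CPU stuck waiting for the CSD
			 * lock ... */
			pr_err("csd: CSD lock stuck for >10s on CPU#%d\n",
			       raw_smp_processor_id());
			dump_stack();
			/* ... which also serves as a distinctive marker the
			 * L0 hypervisor can watch for before dumping its
			 * kvm_lapic state, while the stuck target CPU itself
			 * is left untouched. */
			start = sched_clock();	/* re-arm and keep waiting */
		}
		cpu_relax();
	}
}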

Hopefully I can make progress and present meaningful results in my next update.

Thanks,
--chris j arges

