Date:	Tue, 06 Sep 2011 11:07:26 -0700
From:	Jeremy Fitzhardinge <jeremy@...p.org>
To:	Don Zickus <dzickus@...hat.com>
CC:	Peter Zijlstra <peterz@...radead.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>,
	the arch/x86 maintainers <x86@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Nick Piggin <npiggin@...nel.dk>, Avi Kivity <avi@...hat.com>,
	Marcelo Tosatti <mtosatti@...hat.com>,
	KVM <kvm@...r.kernel.org>, Andi Kleen <andi@...stfloor.org>,
	Xen Devel <xen-devel@...ts.xensource.com>,
	Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>,
	Stefano Stabellini <stefano.stabellini@...citrix.com>
Subject: Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking

On 09/06/2011 08:14 AM, Don Zickus wrote:
> On Fri, Sep 02, 2011 at 02:50:53PM -0700, Jeremy Fitzhardinge wrote:
>> On 09/02/2011 01:47 PM, Peter Zijlstra wrote:
>>> On Fri, 2011-09-02 at 12:29 -0700, Jeremy Fitzhardinge wrote:
>>>>> I know that it's generally considered bad form, but there's at least one
>>>>> spinlock that's only taken from NMI context and thus hasn't got any
>>>>> deadlock potential.
>>>> Which one? 
>>> arch/x86/kernel/traps.c:nmi_reason_lock
>>>
>>> It serializes NMI access to the NMI reason port across CPUs.
>> Ah, OK.  Well, that will never happen in a PV Xen guest.  But PV
>> ticketlocks are equally applicable to an HVM Xen domain (and KVM guest),
>> so I guess there's at least some chance there could be a virtual
>> emulated NMI.  Maybe?  Does qemu do that kind of thing?
>>
>> But, erm, does that even make sense?  I'm assuming the NMI reason port
>> tells the CPU why it got an NMI.  If multiple CPUs can get NMIs and
>> there's only a single reason port, then doesn't that mean that either 1)
>> they all got the NMI for the same reason, or 2) having a single port is
>> inherently racy?  How does the locking actually work there?
> The reason port is for an external/system NMI.  The IPI-NMIs don't need
> to access this register to process their handlers, e.g. perf.  I think
> in general the IOAPIC is configured to deliver the external NMI to one
> cpu, usually the bsp cpu.  However, there has been a slow movement to
> free the bsp cpu from exceptions like this, to eventually allow
> hot-swapping of the bsp cpu.  The spinlocks in that code were an attempt
> to be more abstract about who really gets the external NMI.  Of course,
> SGI's box is set up to deliver an external NMI to all cpus to dump their
> stacks when the system isn't behaving.
>
> This is a very low-usage NMI (in fact, almost all cases lead to loud
> console messages).
>
> Hope that clears up some of the confusion.

Hm, not really.

What does it mean if two CPUs go down that path?  Should one do some NMI
processing while the other waits around for it to finish, and then do
some NMI processing on its own?
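
As far as I can tell the current code just brute-forces that case with
the lock; from memory (so treat this as a sketch rather than a verbatim
quote of traps.c), the pattern is something like:

	static DEFINE_RAW_SPINLOCK(nmi_reason_lock);

	static notrace void default_do_nmi(struct pt_regs *regs)
	{
		unsigned char reason;

		/* non-CPU-specific NMI: serialize access to the reason port */
		raw_spin_lock(&nmi_reason_lock);
		reason = x86_platform.get_nmi_reason();
		if (reason & NMI_REASON_MASK) {
			if (reason & NMI_REASON_SERR)
				pci_serr_error(reason, regs);
			else if (reason & NMI_REASON_IOCHK)
				io_check_error(reason, regs);
		}
		raw_spin_unlock(&nmi_reason_lock);
	}

so a second CPU taking an external NMI at the same time just spins in
NMI context until the first one is done with the port.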

It sounds like that could only happen if you reroute the NMI from one CPU
to another while the first CPU is still in the middle of processing an
NMI - in which case, shouldn't the code doing the rerouting be taking
the spinlock?

Or perhaps a spinlock isn't the right primitive to use at all?  Couldn't
the second CPU just set a flag/counter (using something like an atomic
add/cmpxchg/etc) to make the first CPU process the second NMI?
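
Hand-waving sketch of what I mean (untested, and the names are made up):

	static atomic_t nmi_pending = ATOMIC_INIT(0);

	static void handle_reason_nmi(struct pt_regs *regs)
	{
		/*
		 * If another CPU is already handling the reason port,
		 * leave our request behind and return; the owner will
		 * see the nonzero count and do the work for us.
		 */
		if (atomic_add_return(1, &nmi_pending) > 1)
			return;

		do {
			/* hypothetical helper: read the port and dispatch */
			process_nmi_reason(regs);
			/*
			 * Drop our ticket; if more NMIs arrived meanwhile
			 * the count is still positive, so go around again
			 * rather than making the other CPUs spin.
			 */
		} while (atomic_dec_return(&nmi_pending) > 0);
	}

Nobody ever spins in NMI context that way; the cost is that the first
CPU may end up doing a second CPU's processing, but for a path this
rare that seems harmless.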

But on the other hand, I don't really care about any of this if you can
say that this path will never be called in a virtual machine.

    J
