Date:	Fri, 02 Sep 2011 14:50:53 -0700
From:	Jeremy Fitzhardinge <jeremy@...p.org>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	"H. Peter Anvin" <hpa@...or.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>,
	the arch/x86 maintainers <x86@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Nick Piggin <npiggin@...nel.dk>, Avi Kivity <avi@...hat.com>,
	Marcelo Tosatti <mtosatti@...hat.com>,
	KVM <kvm@...r.kernel.org>, Andi Kleen <andi@...stfloor.org>,
	Xen Devel <xen-devel@...ts.xensource.com>,
	Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>,
	Stefano Stabellini <stefano.stabellini@...citrix.com>
Subject: Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking

On 09/02/2011 01:47 PM, Peter Zijlstra wrote:
> On Fri, 2011-09-02 at 12:29 -0700, Jeremy Fitzhardinge wrote:
>>> I know that it's generally considered bad form, but there's at least one
>>> spinlock that's only taken from NMI context and thus hasn't got any
>>> deadlock potential.
>> Which one? 
> arch/x86/kernel/traps.c:nmi_reason_lock
>
> It serializes NMI access to the NMI reason port across CPUs.

Ah, OK.  Well, that will never happen in a PV Xen guest.  But PV
ticketlocks are equally applicable to an HVM Xen domain (and KVM guest),
so I guess there's at least some chance there could be a virtual
emulated NMI.  Maybe?  Does qemu do that kind of thing?

But, erm, does that even make sense?  I'm assuming the NMI reason port
tells the CPU why it got an NMI.  If multiple CPUs can get NMIs and
there's only a single reason port, then doesn't that mean that either 1)
they all got the NMI for the same reason, or 2) having a single port is
inherently racy?  How does the locking actually work there?
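
(For reference, the serialization Peter is describing looks roughly like the
following in arch/x86/kernel/traps.c of that era -- a simplified sketch of the
pattern, not the exact code:)

static DEFINE_RAW_SPINLOCK(nmi_reason_lock);

static notrace void default_do_nmi(struct pt_regs *regs)
{
	unsigned char reason;

	/* Only one CPU at a time may read and act on the reason port. */
	raw_spin_lock(&nmi_reason_lock);
	reason = get_nmi_reason();	/* reads the legacy status port 0x61 */

	if (reason & NMI_REASON_MASK) {
		if (reason & NMI_REASON_SERR)
			pci_serr_error(reason, regs);
		else if (reason & NMI_REASON_IOCHK)
			io_check_error(reason, regs);
		raw_spin_unlock(&nmi_reason_lock);
		return;
	}
	raw_spin_unlock(&nmi_reason_lock);

	/* ... otherwise fall through to the unknown-NMI path ... */
}

(So the lock's only job is to keep two CPUs from reading and handling the
reason port at the same time, per Peter's description above.)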

    J
