Message-ID: <4D363E39.2050100@goop.org>
Date: Tue, 18 Jan 2011 17:28:25 -0800
From: Jeremy Fitzhardinge <jeremy@...p.org>
To: vatsa@...ux.vnet.ibm.com
CC: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Nick Piggin <npiggin@...e.de>,
Peter Zijlstra <peterz@...radead.org>,
Jan Beulich <JBeulich@...ell.com>, Avi Kivity <avi@...hat.com>,
Xen-devel <xen-devel@...ts.xensource.com>, suzuki@...ibm.com,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Subject: Re: [PATCH RFC 09/12] xen/pvticketlock: Xen implementation for PV
ticket locks
On 01/18/2011 08:27 AM, Srivatsa Vaddagiri wrote:
>> No, interrupts are disabled while waiting to take the lock, so it isn't
>> possible for an interrupt to come in.
> Where are we disabling interrupts? Is it in xen_poll_irq()?
No, they're already disabled in the generic spinlock code.
arch_spin_lock_flags() can re-enable them if it wants.
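Roughly, that looks like this - a simplified sketch, not the actual x86
code, of how an old-style unfair lock's arch_spin_lock_flags() could drop
back to enabled interrupts while it waits (arch_spin_trylock(),
arch_spin_is_locked() and the irqflags helpers are just used for
illustration here):

    static inline void arch_spin_lock_flags(arch_spinlock_t *lock,
                                            unsigned long flags)
    {
            while (!arch_spin_trylock(lock)) {
                    /* the caller had interrupts on, so allow them while we spin */
                    if (!arch_irqs_disabled_flags(flags))
                            local_irq_restore(flags);

                    while (arch_spin_is_locked(lock))
                            cpu_relax();

                    /* the lock itself must always be taken with interrupts off */
                    local_irq_disable();
            }
    }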
>> With the old-style locks it was
>> reasonable to leave interrupts enabled while spinning, but with ticket
>> locks it isn't.
>>
>> (I have some prototype patches to implement nested spinning of ticket
>> locks,
> Hmm .. where is nested spinning allowed/possible? Won't process context
> disable interrupts/bh to keep them from wanting the same (spin-)lock it
> is trying to acquire?
If you're in an interrupt-enabled context at the time you're taking an
interrupt-safe spinlock (i.e., using spin_lock_irq[save]), then it is (in
principle) valid to leave interrupts enabled until you actually acquire
the lock (obviously you must avoid any window in which the lock is held
with interrupts enabled).
We did this with the old-style locks (both native and pv) - it seems
like it should be especially useful for interrupt latency if we end up
waiting on the lock for a long time. However, it can't be done with
ticket locks. I also have no idea how often we actually ended up being
able to do it in practice.
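To see why the ticket scheme rules it out, consider a stripped-down
ticket lock (hypothetical struct and field names, purely for
illustration, not the real arch_spinlock_t):

    struct ticket_lock {
            atomic_t next;          /* next ticket to hand out */
            atomic_t owner;         /* ticket currently being served */
    };

    static inline void ticket_lock(struct ticket_lock *lock)
    {
            /* claim our place in line */
            int ticket = atomic_inc_return(&lock->next) - 1;

            /*
             * Re-enabling interrupts here would be unsafe: an interrupt
             * handler on this CPU that takes the same lock draws a later
             * ticket and spins behind us, but we can't run again until it
             * returns, so neither side ever makes progress.
             */
            while (atomic_read(&lock->owner) != ticket)
                    cpu_relax();
    }

With the old unfair lock there's no queue, so an interrupt handler can
simply win the race for the lock; with tickets the waiter has already
committed to a slot it can't give back.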
J