Message-ID: <4C2B14BE.6080505@goop.org>
Date: Wed, 30 Jun 2010 11:56:14 +0200
From: Jeremy Fitzhardinge <jeremy@...p.org>
To: Peter Zijlstra <peterz@...radead.org>
CC: Jan Beulich <JBeulich@...ell.com>, "mingo@...e.hu" <mingo@...e.hu>,
"tglx@...utronix.de" <tglx@...utronix.de>,
ksrinivasan <ksrinivasan@...ell.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"hpa@...or.com" <hpa@...or.com>
Subject: Re: [PATCH 1/4, v2] x86: enlightenment for ticket spin locks - base
implementation

On 06/30/2010 11:11 AM, Peter Zijlstra wrote:
> On Wed, 2010-06-30 at 10:00 +0100, Jan Beulich wrote:
>
>>>>> On 30.06.10 at 10:05, Peter Zijlstra <peterz@...radead.org> wrote:
>>>>>
>>> On Tue, 2010-06-29 at 15:31 +0100, Jan Beulich wrote:
>>>
>>>> Add optional (alternative instructions based) callout hooks to the
>>>> contended ticket lock and the ticket unlock paths, to allow hypervisor
>>>> specific code to be used for reducing/eliminating the bad effects
>>>> ticket locks have on performance when running virtualized.
>>>>
>>> Uhm, I'd much rather see a single alternative implementation, not a
>>> per-hypervisor lock implementation.
>>>
>> How would you imagine this to work? I can't see how the mechanism
>> could be hypervisor agnostic. Just look at the Xen implementation
>> (patch 2) - do you really see room for meaningful abstraction there?
>>
> I tried not to; it made my eyes bleed...
>
> But from what I hear all virt people are suffering from spinlocks (and
> fair spinlocks in particular), so I was thinking it'd be a good idea to
> get all interested parties to collaborate on one. Fragmentation like
> this hardly ever works out well.
>
The fastpath of the spinlock can be common, but if a CPU ends up
spinning too long (however that might be defined), then it needs to call
out to a hypervisor-specific piece of code which is effectively "yield
this vcpu until it's worth trying again".  In Xen we can set up an event
channel that the waiting CPU can block on, and the current lock holder
can tickle it when it releases the lock (ideally it would just tickle
the CPU with the next ticket, but that's a further refinement).
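
Roughly, the shape I have in mind, as a standalone C sketch rather than
the actual patch (SPIN_THRESHOLD and the pv_* hook names here are
placeholders I've made up; in the kernel the hooks would be bare-metal
no-ops patched over via the alternatives mechanism):

#include <stdatomic.h>
#include <stdint.h>

#define SPIN_THRESHOLD (1 << 11)        /* arbitrary spin budget */

struct ticket_lock {                    /* zero-initialise before use */
        atomic_ushort head;             /* ticket now being served */
        atomic_ushort tail;             /* next ticket to hand out */
};

/* Hypervisor-specific callouts; bare-metal stubs are at the bottom. */
void pv_lock_spinning(struct ticket_lock *lock, uint16_t ticket);
void pv_unlock_kick(struct ticket_lock *lock, uint16_t next);

static void ticket_lock(struct ticket_lock *lock)
{
        uint16_t me = atomic_fetch_add(&lock->tail, 1); /* take a ticket */

        for (;;) {
                /* Common fastpath: spin for a bounded number of tries. */
                for (int i = 0; i < SPIN_THRESHOLD; i++) {
                        if (atomic_load_explicit(&lock->head,
                                        memory_order_acquire) == me)
                                return;         /* our turn */
                        __builtin_ia32_pause(); /* cpu_relax() */
                }
                /* Slowpath: ask the hypervisor to block this vcpu
                   until the holder kicks us, then try again. */
                pv_lock_spinning(lock, me);
        }
}

static void ticket_unlock(struct ticket_lock *lock)
{
        uint16_t next = atomic_load(&lock->head) + 1;

        atomic_store_explicit(&lock->head, next, memory_order_release);
        pv_unlock_kick(lock, next);     /* wake the next ticket holder */
}

/* Bare-metal versions: do nothing, the spin loop just retries. */
void pv_lock_spinning(struct ticket_lock *l, uint16_t t) { (void)l; (void)t; }
void pv_unlock_kick(struct ticket_lock *l, uint16_t n) { (void)l; (void)n; }
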
I'm not sure what the corresponding implementations for KVM or Hyper-V
would look like.  Modern Intel chips have a "do a VMEXIT if you've
executed PAUSE in a tight loop for too long" feature (Pause-Loop
Exiting), which deals with the "spinning too long" part, but I'm not
sure about the blocking mechanism (something based on monitor/mwait,
perhaps).
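
For comparison, the Xen hooks would be shaped something like this,
continuing the sketch above (again, every helper name is a made-up
stand-in, and per-cpu setup, interrupts, and nested spinning are all
ignored):

/*
 * Xen-shaped versions of the hooks, replacing the bare-metal stubs.
 * The externs are stand-ins for real hypercall wrappers:
 * xen_poll_port() would be SCHEDOP_poll (block this vcpu until the
 * event channel is pending), xen_send_event() would be EVTCHNOP_send,
 * and lock_port[] would be per-cpu event channels bound at boot.
 */
#define NR_CPUS 64

extern int this_cpu(void);                  /* current vcpu number */
extern void evtchn_clear_pending(int port);
extern void xen_poll_port(int port);
extern void xen_send_event(int port);

static int lock_port[NR_CPUS];
static struct ticket_lock *waiting_on[NR_CPUS];

void pv_lock_spinning(struct ticket_lock *lock, uint16_t ticket)
{
        int cpu = this_cpu();

        waiting_on[cpu] = lock;         /* advertise what we wait for */
        evtchn_clear_pending(lock_port[cpu]);
        /* Re-check before blocking so a kick sent between our last
           spin and this point can't be lost. */
        if (atomic_load_explicit(&lock->head,
                        memory_order_acquire) != ticket)
                xen_poll_port(lock_port[cpu]);  /* sleep until tickled */
        waiting_on[cpu] = NULL;
}

void pv_unlock_kick(struct ticket_lock *lock, uint16_t next)
{
        (void)next;                     /* simple version ignores this */

        for (int cpu = 0; cpu < NR_CPUS; cpu++)
                if (waiting_on[cpu] == lock)
                        xen_send_event(lock_port[cpu]);
}

The unlock side here kicks every vcpu waiting on the lock; only kicking
the one holding the next ticket is the refinement I mentioned above.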
J