Message-Id: <4C2B4A100200007800008CC5@vpn.id2.novell.com>
Date: Wed, 30 Jun 2010 12:43:44 +0100
From: "Jan Beulich" <JBeulich@...ell.com>
To: "Jeremy Fitzhardinge" <jeremy@...p.org>,
"Peter Zijlstra" <peterz@...radead.org>
Cc: "mingo@...e.hu" <mingo@...e.hu>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"Ky Srinivasan" <KSrinivasan@...ell.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"hpa@...or.com" <hpa@...or.com>
Subject: Re: [PATCH 1/4, v2] x86: enlightenment for ticket spin locks -
base implementation
>>> On 30.06.10 at 11:56, Jeremy Fitzhardinge <jeremy@...p.org> wrote:
> On 06/30/2010 11:11 AM, Peter Zijlstra wrote:
>> On Wed, 2010-06-30 at 10:00 +0100, Jan Beulich wrote:
>>
>>>>>> On 30.06.10 at 10:05, Peter Zijlstra <peterz@...radead.org> wrote:
>>>>>>
>>>> On Tue, 2010-06-29 at 15:31 +0100, Jan Beulich wrote:
>>>>
>>>>> Add optional (alternative instructions based) callout hooks to the
>>>>> contended ticket lock and the ticket unlock paths, to allow hypervisor
>>>>> specific code to be used for reducing/eliminating the bad effects
>>>>> ticket locks have on performance when running virtualized.
>>>>>
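For readers without the patches at hand, here is a rough user-space C
sketch of the hook shape under discussion; plain function pointers stand
in for the kernel's alternative-instructions patching, and the names
(virt_spin_wait, virt_spin_kick) and the 1024-spin threshold are invented
for illustration rather than taken from the patch:

#include <stdatomic.h>

struct ticket_lock {
	atomic_ushort next;	/* next ticket to hand out */
	atomic_ushort owner;	/* ticket currently being served */
};

/* Hypervisor-specific callouts; NULL means bare metal, just keep spinning. */
static void (*virt_spin_wait)(struct ticket_lock *, unsigned short);
static void (*virt_spin_kick)(struct ticket_lock *, unsigned short);

static void ticket_lock(struct ticket_lock *lock)
{
	unsigned short me = atomic_fetch_add(&lock->next, 1);
	unsigned int spins = 0;

	while (atomic_load(&lock->owner) != me) {
		/* Contended path: after a while, yield the vcpu. */
		if (virt_spin_wait && ++spins > 1024) {
			virt_spin_wait(lock, me);
			spins = 0;
		}
	}
}

static void ticket_unlock(struct ticket_lock *lock)
{
	unsigned short next = atomic_load(&lock->owner) + 1;

	atomic_store(&lock->owner, next);
	/* Unlock path: tickle whoever holds the next ticket. */
	if (virt_spin_kick)
		virt_spin_kick(lock, next);
}

On bare metal the two pointers stay NULL and the code is a plain ticket
lock; a hypervisor fills them in at boot, which is roughly what the
alternative-instructions mechanism achieves without the indirect call.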
>>>> Uhm, I'd much rather see a single alternative implementation, not a
>>>> per-hypervisor lock implementation.
>>>>
>>> How would you imagine this working? I can't see how the mechanism
>>> could be hypervisor-agnostic. Just look at the Xen implementation
>>> (patch 2) - do you really see room for meaningful abstraction there?
>>>
>> I tried not to; it made my eyes bleed...
>>
>> But from what I hear all virt people are suffering from spinlocks (and
>> fair spinlocks in particular), so I was thinking it'd be a good idea to
>> get all interested parties to collaborate on one. Fragmentation like
>> this hardly ever works out well.
>>
>
> The fastpath of the spinlocks can be common, but if it ends up spinning
> too long (however that might be defined), then it needs to call out to a
> hypervisor-specific piece of code which is effectively "yield this vcpu
> until it's worth trying again". In Xen we can set up an event channel
> that the waiting CPU can block on, and the current lock holder can
> tickle it when it releases the lock (ideally it would just tickle the
> CPU with the next ticket, but that's a further refinement).
It does tickle just the new owner - that's what the list is for.
Jan
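To make the block/tickle scheme concrete, here is a minimal sketch of
what the two callouts might look like on the Xen side, reusing struct
ticket_lock from the sketch above; evtchn_block(), evtchn_kick(),
this_cpu() and MAX_CPUS are hypothetical stand-ins for the per-cpu
event-channel primitives, and the real patch tracks waiters in a list
rather than a flat per-cpu table:

/* Hypothetical event-channel primitives, declared for the sketch only. */
extern void evtchn_block(void);		/* block this vcpu until kicked */
extern void evtchn_kick(unsigned int cpu);	/* wake the given vcpu */
extern unsigned int this_cpu(void);

#define MAX_CPUS 64

struct spin_waiter {
	struct ticket_lock *lock;	/* lock being waited on, or NULL */
	unsigned short ticket;		/* ticket this cpu holds */
};

static struct spin_waiter waiters[MAX_CPUS];	/* one slot per cpu */

static void xen_spin_wait(struct ticket_lock *lock, unsigned short ticket)
{
	struct spin_waiter *w = &waiters[this_cpu()];

	w->ticket = ticket;
	w->lock = lock;		/* publish after the ticket is set */
	/* Re-check so a kick sent before we published is not lost;
	 * the real code needs stronger memory ordering than this. */
	if (atomic_load(&lock->owner) != ticket)
		evtchn_block();
	w->lock = NULL;
}

static void xen_spin_kick(struct ticket_lock *lock, unsigned short next)
{
	unsigned int cpu;

	/* Wake only the cpu whose ticket is now up. */
	for (cpu = 0; cpu < MAX_CPUS; cpu++)
		if (waiters[cpu].lock == lock && waiters[cpu].ticket == next)
			evtchn_kick(cpu);
}

The kick side is the "tickle just the new owner" behaviour described
above: the unlocker wakes only the cpu whose recorded ticket matches the
new owner value, rather than broadcasting to every waiter.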