Message-ID: <20100602085055.GA14221@basil.fritz.box>
Date: Wed, 2 Jun 2010 10:50:55 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Avi Kivity <avi@...hat.com>
Cc: Andi Kleen <andi@...stfloor.org>, Gleb Natapov <gleb@...hat.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org, hpa@...or.com,
mingo@...e.hu, npiggin@...e.de, tglx@...utronix.de,
mtosatti@...hat.com
Subject: Re: [PATCH] use unfair spinlock when running on hypervisor.

On Wed, Jun 02, 2010 at 05:51:14AM +0300, Avi Kivity wrote:
> On 06/01/2010 08:27 PM, Andi Kleen wrote:
>> On Tue, Jun 01, 2010 at 07:52:28PM +0300, Avi Kivity wrote:
>>
>>> We are running everything on NUMA (since all modern machines are now NUMA).
>>> At what scale do the issues become observable?
>>>
>> On Intel platforms it's visible starting with 4 sockets.
>>
>
> Can you recommend a benchmark that shows bad behaviour? I'll run it with

Pretty much anything with high lock contention.

> ticket spinlocks and Gleb's patch. I have a 4-way Nehalem-EX, presumably
> the huge number of threads will magnify the problem even more there.

Yes, more threads cause more lock contention too.
> Do you have any idea how we can tackle both problems?

Apparently Xen has something; perhaps that can be leveraged
(but I haven't looked at their solution in detail).

Otherwise I would probably try to start with an adaptive
spinlock that at some point calls into the HV (or updates
shared memory?), like John Cooper suggested. The tricky part here would
be to find the thresholds and fit that state into
paravirt ops and the standard spinlock_t.
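
Something roughly along these lines (untested sketch, all names made up
for illustration; a real version would have to keep ticket fairness and
squeeze the extra state into spinlock_t and the pv ops, which this does
not attempt):

/*
 * Untested sketch only.  pv_spinlock, SPIN_THRESHOLD and
 * hv_yield_to_holder() are made-up names; the stub below just yields
 * the CPU where a real guest would issue a hypercall.
 */
#include <sched.h>
#include <stdatomic.h>

#define SPIN_THRESHOLD 1024	/* finding a good value is the hard part */

struct pv_spinlock {
	atomic_flag locked;
};

#define PV_SPINLOCK_INIT { ATOMIC_FLAG_INIT }

/* Stand-in for a hypercall telling the HV which lock we are waiting on,
 * so it can run the (possibly preempted) holder instead of us. */
static void hv_yield_to_holder(struct pv_spinlock *lock)
{
	(void)lock;
	sched_yield();
}

static void pv_spin_lock(struct pv_spinlock *lock)
{
	for (;;) {
		unsigned int i;

		/* Fast path: spin a bounded number of times in the guest. */
		for (i = 0; i < SPIN_THRESHOLD; i++) {
			if (!atomic_flag_test_and_set_explicit(&lock->locked,
						memory_order_acquire))
				return;
		}
		/* Slow path: the holder is probably not running, so stop
		 * burning the VCPU's timeslice and ask the HV for help. */
		hv_yield_to_holder(lock);
	}
}

static void pv_spin_unlock(struct pv_spinlock *lock)
{
	atomic_flag_clear_explicit(&lock->locked, memory_order_release);
}

The interesting knob is SPIN_THRESHOLD: too low and you hypercall on
every short contention, too high and you spin away the timeslice while
the holder is preempted.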
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.