Message-ID: <4C053ACC.5020708@redhat.com>
Date: Tue, 01 Jun 2010 19:52:28 +0300
From: Avi Kivity <avi@...hat.com>
To: Andi Kleen <andi@...stfloor.org>
CC: Gleb Natapov <gleb@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, hpa@...or.com, mingo@...e.hu, npiggin@...e.de,
tglx@...utronix.de, mtosatti@...hat.com
Subject: Re: [PATCH] use unfair spinlock when running on hypervisor.
On 06/01/2010 07:38 PM, Andi Kleen wrote:
>>> Your new code would starve again, right?
>>>
>> Yes, of course it may starve with an unfair spinlock. But since vcpus are
>> not always running, there is a much smaller chance that a vcpu on a remote
>> memory node will starve forever. Old kernels with unfair spinlocks run
>> fine in VMs on NUMA machines under various loads.
>>
> Try it on a NUMA system with unfair memory.
>
We are running everything on NUMA (since all modern machines are now
NUMA). At what scale do the issues become observable?
>> I understand that reasoning and am not proposing to go back to the old
>> spinlock on physical HW! But with virtualization, the performance hit is
>> unbearable.
>>
> Extreme unfairness can be unbearable too.
>
Well, the question is what happens first. In our experience, vcpu
overcommit is a lot more painful. People will never see the NUMA
unfairness issue if they can't use kvm due to the vcpu overcommit problem.
What I'd like to see eventually is a short-term-unfair, long-term-fair
spinlock. Might make sense for bare metal as well. But it won't be
easy to write.
--
error compiling committee.c: too many arguments to function