Message-ID: <4CFB8D50.6050109@redhat.com>
Date: Sun, 05 Dec 2010 15:02:08 +0200
From: Avi Kivity <avi@...hat.com>
To: Chris Wright <chrisw@...s-sol.org>
CC: Rik van Riel <riel@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...e.hu>,
Anthony Liguori <aliguori@...ux.vnet.ibm.com>
Subject: Re: [RFC PATCH 0/3] directed yield for Pause Loop Exiting
On 12/03/2010 12:41 AM, Chris Wright wrote:
> * Rik van Riel (riel@...hat.com) wrote:
> > When running SMP virtual machines, it is possible for one VCPU to be
> > spinning on a spinlock, while the VCPU that holds the spinlock is not
> > currently running, because the host scheduler preempted it to run
> > something else.
> >
> > Both Intel and AMD CPUs have a feature that detects when a virtual
> > CPU is spinning on a lock and will trap to the host.
> >
> > The current KVM code sleeps for a bit whenever that happens, which
> > results in e.g. a 64 VCPU Windows guest taking forever and a bit to
> > boot up. This is because the VCPU holding the lock is actually
> > running and not sleeping, so the pause is counter-productive.
>
> Seems like simply increasing the spin window would help in that case?
> Or is it just too contended a lock (I think they use MCS locks, so I
> can see a single wrong sleep causing real contention problems).
It may, but that just pushes the problem to a more contended lock or to
a higher vcpu count. We want something that works after PLE threshold
tuning has failed.
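
[Editor's note: the "spin window" above is the PLE exit threshold, which
on Intel hardware is the kvm_intel module's ple_window parameter (4096
cycles by default). Below is a minimal sketch of the directed yield
being discussed, not the posted patches themselves. It assumes a
yield_to(task, preempt) scheduler primitive of the kind this series is
driving toward, and a hypothetical vcpu->task pointer for finding the
thread backing each vcpu.]

/*
 * Sketch: on a PLE exit, donate the spinner's timeslice to a
 * preempted sibling vcpu instead of sleeping.  One of the preempted
 * vcpus is likely holding the contended lock.
 */
#include <linux/kvm_host.h>
#include <linux/sched.h>

static void vcpu_on_spin_directed(struct kvm_vcpu *me)
{
	struct kvm *kvm = me->kvm;
	struct kvm_vcpu *vcpu;
	int i;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (vcpu == me)
			continue;
		/* A halted vcpu is waiting for an event, not a lock. */
		if (waitqueue_active(&vcpu->wq))
			continue;
		if (!vcpu->task)	/* hypothetical field */
			continue;
		/* Boost the first preempted vcpu we can yield to. */
		if (yield_to(vcpu->task, true))
			break;
	}
}

[The point of this shape, as the thread argues, is that it keeps
working where threshold tuning fails: rather than guessing how long to
spin or sleep, the spinner hands its CPU time directly to a runnable
but preempted vcpu, so the lock holder makes progress sooner.]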
--
error compiling committee.c: too many arguments to function