Message-ID: <4D22E3C0.7010808@redhat.com>
Date: Tue, 04 Jan 2011 11:09:20 +0200
From: Avi Kivity <avi@...hat.com>
To: Mike Galbraith <efault@....de>
CC: Rik van Riel <riel@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Chris Wright <chrisw@...s-sol.org>
Subject: Re: [RFC -v3 PATCH 0/3] directed yield for Pause Loop Exiting
On 01/04/2011 08:42 AM, Mike Galbraith wrote:
> A couple questions.
>
> On Mon, 2011-01-03 at 16:26 -0500, Rik van Riel wrote:
> > When running SMP virtual machines, it is possible for one VCPU to be
> > spinning on a spinlock, while the VCPU that holds the spinlock is not
> > currently running, because the host scheduler preempted it to run
> > something else.
>
> Do you have any numbers?
>
> If I were to, say, run a 256 CPU VM on my quad, would this help me get
> more hackbench or whatever oomph from my (256X80386/20:) box?
First of all, you can't run 256 vcpus in a single guest on x86 kvm. Second,
you'll never see better performance when you overcommit. What this patchset
does is reduce the degradation from utterly ridiculous to something
manageable. It allows a host to deliver reasonable performance when
overcommitted vcpus are actually used, but it's still not a good idea to run
64 vcpus on a 32-cpu host.
> > Both Intel and AMD CPUs have a feature that detects when a virtual
> > CPU is spinning on a lock and will trap to the host.
>
> Does an Intel Q6600 have this trap gizmo (iow will this do anything at
> all for my little box if I were to try it out).
Likely not. Run
http://git.kernel.org/?p=virt/kvm/qemu-kvm.git;a=blob_plain;f=kvm/scripts/vmxcap;hb=HEAD
and look for 'PAUSE-loop exiting'. I think the first processors to include
it were the Nehalem-EXs, and Westmeres have it as well.
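
For reference, the same check can be done by hand: the vmxcap script reads
the VMX capability MSRs, and PAUSE-loop exiting is advertised as bit 10 of
the "allowed-1" half (the upper 32 bits) of IA32_VMX_PROCBASED_CTLS2
(MSR 0x48b). A rough sketch of the decode, using a made-up sample value in
place of a real rdmsr(1) read:

```shell
# Decode IA32_VMX_PROCBASED_CTLS2 (MSR 0x48b) by hand.
# PAUSE-loop exiting is bit 10 of the allowed-1 half (upper 32 bits).
# The sample value below is made up; on real hardware substitute the
# output of: rdmsr 0x48b  (from msr-tools, with the msr module loaded)
msr_val=0x0000ffff00000000
allowed1=$(( msr_val >> 32 ))
if (( (allowed1 >> 10) & 1 )); then
    echo "PAUSE-loop exiting: supported"
else
    echo "PAUSE-loop exiting: not supported"
fi
```

With the sample value above this reports "supported"; on a Q6600 the real
MSR read would show the bit clear.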
--
error compiling committee.c: too many arguments to function