Message-ID: <C4FA8004DBDF864387014E5EA4383B0F580BD0F5E7@GVW1336EXC.americas.hpqcorp.net>
Date: Thu, 28 Jun 2012 23:55:54 +0100
From: "Vinod, Chegu" <chegu_vinod@...com>
To: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
Andrew Jones <drjones@...hat.com>
CC: Rik van Riel <riel@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Srikar <srikar@...ux.vnet.ibm.com>,
Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
KVM <kvm@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
Gleb Natapov <gleb@...hat.com>,
Jeremy Fitzhardinge <jeremy@...p.org>,
Avi Kivity <avi@...hat.com>, Ingo Molnar <mingo@...hat.com>
Subject: RE: [PATCH] kvm: handle last_boosted_vcpu = 0 case
Hello,
I am just catching up on this email thread...
Perhaps one of you may be able to help answer this query, preferably along with some data. [BTW, I do understand the basic intent behind PLE in the typical sweet-spot use case where there is overcommitment, and the need to optimize the PLE handler in the host.]
In a use case where the host has fewer but much larger guests (say 40 VCPUs and higher) and there is no overcommitment (i.e. the number of vcpus across guests is <= the number of physical cpus in the host, and perhaps each guest has its vcpus pinned to specific physical cpus for other reasons), I would like to understand if/how PLE really helps. For these use cases, would it be OK to turn PLE off (ple_gap=0), since there is no real need to take an exit and find some other VCPU to yield to?
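[For reference, in case anyone wants to experiment: on an Intel host this would mean reloading the module with the parameter, along these lines (a sketch only -- all guests shut down first, and assuming the kvm_intel variant):

  # rmmod kvm_intel
  # modprobe kvm_intel ple_gap=0
  # cat /sys/module/kvm_intel/parameters/ple_gap
  0
]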
Thanks
Vinod
-----Original Message-----
From: Raghavendra K T [mailto:raghavendra.kt@...ux.vnet.ibm.com]
Sent: Thursday, June 28, 2012 9:22 AM
To: Andrew Jones
Cc: Rik van Riel; Marcelo Tosatti; Srikar; Srivatsa Vaddagiri; Peter Zijlstra; Nikunj A. Dadhania; KVM; LKML; Gleb Natapov; Vinod, Chegu; Jeremy Fitzhardinge; Avi Kivity; Ingo Molnar
Subject: Re: [PATCH] kvm: handle last_boosted_vcpu = 0 case
On 06/28/2012 09:30 PM, Andrew Jones wrote:
>
>
> ----- Original Message -----
>> In summary, current PV has a huge benefit on non-PLE machines.
>>
>> On PLE machines, the results become very sensitive to the load, the
>> type of workload, and the SPIN_THRESHOLD. PLE interference also has
>> a significant effect on them. But PV still has a slight edge over
>> non-PV.
>>
>
> Hi Raghu,
>
> sorry for my slow response. I'm on vacation right now (until the 9th
> of July) and I have limited access to mail.
Ok. Happy Vacation :)
> Also, thanks for
> continuing the benchmarking. Question, when you compare PLE vs.
> non-PLE, are you using different machines (one with and one without),
> or are you disabling its use by loading the kvm module with the
> ple_gap=0 modparam as I did?
Yes, when I say "with PLE disabled" I am doing the same thing you did when comparing the benchmarks, i.e. loading the kvm module with ple_gap=0.
But the older non-PLE results were on a different machine altogether (I had limited access to the PLE machine).
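For anyone just joining the thread: the logic in question is the two-pass yield-candidate scan that kvm_vcpu_on_spin() runs after a PLE exit. Below is a toy userspace model of it -- a sketch only, with made-up names, assuming the post-patch semantics as I read them (the real code in virt/kvm/kvm_main.c walks struct kvm_vcpu objects and yields via the scheduler rather than returning an index):

/*
 * Toy model of the two-pass yield-candidate scan performed by
 * kvm_vcpu_on_spin() after a PLE exit. All names here are invented
 * for illustration; this is not the kernel code.
 */
#include <stdio.h>

#define NR_VCPUS 4

/* Stand-in for the "is this vcpu a good yield target" checks; here
 * every vcpu other than the spinner accepts, so we can trace order. */
static int try_yield_to(int i, int me)
{
	return i != me;
}

/*
 * Scan vcpus starting strictly after the one boosted last time,
 * wrapping around in a second pass, so that over many PLE exits
 * every vcpu gets a turn instead of one vcpu being hammered.
 */
static int pick_boost_target(int me, int last_boosted)
{
	int pass, i;

	for (pass = 0; pass < 2; pass++) {
		for (i = 0; i < NR_VCPUS; i++) {
			if (!pass && i <= last_boosted)
				continue;	/* pass 0: only vcpus after last_boosted */
			if (pass && i >= last_boosted)
				break;		/* pass 1: only the wrapped-around part */
			if (i == me)
				continue;	/* never yield to ourselves */
			if (try_yield_to(i, me))
				return i;	/* caller records this as last_boosted */
		}
	}
	return -1;
}

int main(void)
{
	/* last_boosted = 0: the scan starts at vcpu 1. */
	printf("spinner 2, last_boosted 0 -> boost vcpu %d\n",
	       pick_boost_target(2, 0));
	/* last_boosted = 3: pass 0 finds nothing, pass 1 wraps to vcpu 0. */
	printf("spinner 1, last_boosted 3 -> boost vcpu %d\n",
	       pick_boost_target(1, 3));
	return 0;
}

The point of the "i <= last_boosted" test is that when last_boosted is 0 the first pass starts at vcpu 1, instead of every spinning VCPU piling onto vcpu 0 -- the corner case the $SUBJECT patch handles.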