Message-ID: <50810D0E.9090700@linux.vnet.ibm.com>
Date: Fri, 19 Oct 2012 13:49:26 +0530
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To: Avi Kivity <avi@...hat.com>
CC: "Andrew M. Theurer" <habanero@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Rik van Riel <riel@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Srikar <srikar@...ux.vnet.ibm.com>,
"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
KVM <kvm@...r.kernel.org>, Jiannan Ouyang <ouyang@...pitt.edu>,
chegu vinod <chegu_vinod@...com>,
LKML <linux-kernel@...r.kernel.org>,
Srivatsa Vaddagiri <srivatsa.vaddagiri@...il.com>,
Gleb Natapov <gleb@...hat.com>,
Andrew Jones <drjones@...hat.com>
Subject: Re: [PATCH RFC 1/2] kvm: Handle undercommitted guest case in PLE
handler
On 10/18/2012 06:09 PM, Avi Kivity wrote:
> On 10/09/2012 08:51 PM, Raghavendra K T wrote:
>> Here is the summary:
>> We do get a good benefit from increasing the ple window. Though we
>> don't see much benefit for kernbench and sysbench, for ebizzy we get
>> a huge improvement in the 1x scenario (almost 2/3rd of the
>> ple-disabled case).
>>
>> Let me know if you think we can increase the default ple_window
>> itself to 16k.
>>
>
> I think so; there is no point in running with untuned defaults.
>
Okay.
>>
>> I can respin the whole series including this default ple_window change.
>
> It can come as a separate patch.
Yes. Will spin it separately.
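For reference, the default bump itself should just be the constant in
arch/x86/kvm/vmx.c, something like this (untested sketch against the
current code):

	/* arch/x86/kvm/vmx.c -- sketch: raise the default PLE window */
	#define KVM_VMX_DEFAULT_PLE_GAP       128
	#define KVM_VMX_DEFAULT_PLE_WINDOW  16384	/* was 4096 */

	static int ple_window = KVM_VMX_DEFAULT_PLE_WINDOW;
	module_param(ple_window, int, S_IRUGO);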
>
>>
>> I also have the perf kvm top results for both ebizzy and kernbench.
>> I think they are along expected lines now.
>>
>> Improvements
>> ================
>>
>> 16-core PLE machine with a 16-vcpu guest
>>
>> base = 3.6.0-rc5 + ple handler optimization patches
>> base_pleopt_16k = base + ple_window = 16k
>> base_pleopt_32k = base + ple_window = 32k
>> base_pleopt_nople = base + ple_gap = 0
>> kernbench, hackbench, sysbench (time in sec; lower is better)
>> ebizzy (rec/sec; higher is better)
>>
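Side note on the nople configuration: ple_gap = 0 switches PLE off
completely, since vmx.c drops the exit control when the gap is zero,
roughly:

	/* arch/x86/kvm/vmx.c, in vmx_secondary_exec_control() */
	if (!ple_gap)
		exec_control &= ~SECONDARY_EXEC_PAUSE_LOOP_EXITING;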
>> % improvements w.r.t. base (ple_window = 4k)
>> ---------------+-----------------+-----------------+-------------------+
>>                | base_pleopt_16k | base_pleopt_32k | base_pleopt_nople |
>> ---------------+-----------------+-----------------+-------------------+
>> kernbench_1x   |         0.42371 |         1.15164 |           0.09320 |
>> kernbench_2x   |        -1.40981 |       -17.48282 |        -570.77053 |
>> ---------------+-----------------+-----------------+-------------------+
>> sysbench_1x    |        -0.92367 |         0.24241 |          -0.27027 |
>> sysbench_2x    |        -2.22706 |        -0.30896 |          -1.27573 |
>> sysbench_3x    |        -0.75509 |         0.09444 |          -2.97756 |
>> ---------------+-----------------+-----------------+-------------------+
>> ebizzy_1x      |        54.99976 |        67.29460 |          74.14076 |
>> ebizzy_2x      |        -8.83386 |       -27.38403 |         -96.22066 |
>> ---------------+-----------------+-----------------+-------------------+
>
> So it seems we want dynamic PLE windows. As soon as we enter overcommit
> we need to decrease the window.
>
Okay.
I have a rough idea of the implementation; I'll try it out after the
V2 experiments are over.
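Roughly along these lines (completely untested sketch;
vcpu_overcommitted(), the per-vcpu ple_window field and the constants
are placeholders, not existing code):

	#define PLE_WINDOW_MIN	 4096
	#define PLE_WINDOW_MAX	16384

	/*
	 * Shrink the window quickly once PLE exits indicate overcommit,
	 * grow it back slowly while the guest runs undercommitted.
	 */
	static void update_ple_window(struct vcpu_vmx *vmx)
	{
		unsigned int w = vmx->ple_window;

		if (vcpu_overcommitted(&vmx->vcpu))
			w /= 2;		/* back off fast under contention */
		else
			w += 1024;	/* grow slowly when uncontended */

		vmx->ple_window = clamp(w, (unsigned int)PLE_WINDOW_MIN,
					(unsigned int)PLE_WINDOW_MAX);
		vmcs_write32(PLE_WINDOW, vmx->ple_window);
	}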
So in brief, here is my queue, priority-wise:
1) V2 version of this patch series (in progress)
2) default PLE window
3) preemption notifiers
4) PV spinlock