Message-ID: <4FB0BE4A.6060604@linux.vnet.ibm.com>
Date:	Mon, 14 May 2012 13:41:54 +0530
From:	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To:	Jeremy Fitzhardinge <jeremy@...p.org>
CC:	Avi Kivity <avi@...hat.com>,
	Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...nel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Greg Kroah-Hartman <gregkh@...e.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
	"H. Peter Anvin" <hpa@...or.com>,
	Marcelo Tosatti <mtosatti@...hat.com>, X86 <x86@...nel.org>,
	Gleb Natapov <gleb@...hat.com>, Ingo Molnar <mingo@...hat.com>,
	Attilio Rao <attilio.rao@...rix.com>,
	Virtualization <virtualization@...ts.linux-foundation.org>,
	Xen Devel <xen-devel@...ts.xensource.com>,
	linux-doc@...r.kernel.org, KVM <kvm@...r.kernel.org>,
	Andi Kleen <andi@...stfloor.org>,
	Stefano Stabellini <stefano.stabellini@...citrix.com>,
	Stephan Diestelhorst <stephan.diestelhorst@....com>,
	LKML <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Thomas Gleixner <tglx@...utronix.de>,
	"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>
Subject: Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks

On 05/14/2012 01:08 PM, Jeremy Fitzhardinge wrote:
> On 05/13/2012 11:45 AM, Raghavendra K T wrote:
>> On 05/07/2012 08:22 PM, Avi Kivity wrote:
>>
>> I could not come up with pv-flush results. (Also, Nikunj had clarified
>> that the result was on non-PLE.)
>>
>>> I'd like to see those numbers, then.
>>>
>>> Ingo, please hold on the kvm-specific patches, meanwhile.
>>>
>>
>> 3 guests with 8GB RAM each: 1 used for kernbench
>> (kernbench -f -H -M -o 20), the others for cpuhog (a shell script
>> looping hackbench: while true; do hackbench; done)
>>
>> 1x: no hogs
>> 2x: 8 hogs in one guest
>> 3x: 8 hogs each in two guests
>>
>> kernbench on PLE:
>> Machine: IBM xSeries with Intel(R) Xeon(R) X7560 2.27GHz CPU with 32
>> cores, with 8 online cpus and 4*64GB RAM.
>>
>> The average is taken over 4 iterations with 3 runs each (4*3=12), and
>> the stdev is calculated over the mean reported in each run.
>>
>>
>> A): 8 vcpu guest
>>
>>                 BASE                   BASE+patch             %improvement w.r.t.
>>                 mean (sd)              mean (sd)              patched kernel time
>> case 1*1x:     61.7075  (1.17872)      60.93     (1.475625)    1.27605
>> case 1*2x:    107.2125  (1.3821349)    97.506675 (1.3461878)   9.95401
>> case 1*3x:    144.3515  (1.8203927)   138.9525   (0.58309319)  3.8855
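>>
>> (The last column is computed against the patched kernel time, i.e.
>>     %improvement = (BASE - BASE+patch) / (BASE+patch) * 100
>> e.g. for case 1*1x: (61.7075 - 60.93) / 60.93 * 100 ~= 1.276.)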
>>
>>
>> B): 16 vcpu guest
>>                 BASE                   BASE+patch             %improvement w.r.t.
>>                 mean (sd)              mean (sd)              patched kernel time
>> case 2*1x:     70.524   (1.5941395)    69.68866  (1.9392529)   1.19867
>> case 2*2x:    133.0738  (1.4558653)   124.8568   (1.4544986)   6.58114
>> case 2*3x:    206.0094  (1.3437359)   181.4712   (2.9134116)  13.5218
>>
>> C): 32 vcpu guest
>>                 BASE                   BASE+patch             %improvement w.r.t.
>>                 mean (sd)              mean (sd)              patched kernel time
>> case 4*1x:    100.61046 (2.7603485)    85.48734  (2.6035035)  17.6905
>
> What does the "4*1x" notation mean? Do these workloads have overcommit
> of the PCPU resources?
>
> When I measured it, even quite small amounts of overcommit lead to large
> performance drops with non-pv ticket locks (on the order of 10%
> improvements when there were 5 busy VCPUs on a 4 cpu system).  I never
> tested it on larger machines, but I guess that represents around 25%
> overcommit, or 40 busy VCPUs on a 32-PCPU system.

All the above measurements are on a PLE machine. It is a single 32-vcpu
guest on an 8-pcpu host, so 4* denotes 32 vcpus on the 8 pcpus (4:1
overcommit) and the 1x/2x/3x suffix is the hog load defined above.

(PS: One problem I saw in my kernbench run itself is that the number of
threads spawned was 20 (presumably from the -o 20 option) instead of
2 * the number of vcpus. I'll correct this during the next measurement.)

"even quite small amounts of overcommit lead to large performance drops
with non-pv ticket locks":

This is very much true on non PLE machine. probably compilation takes
even a day vs just one hour. ( with just 1:3x overcommit I had got 25 x
speedup).
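
For reference, the general shape of the pv ticket lock idea is roughly
the following. This is only an illustrative userspace sketch, not the
code from this series; hv_wait()/hv_kick() and SPIN_THRESHOLD are
made-up stand-ins for the real hypercalls and tuning:

    #include <stdatomic.h>

    #define SPIN_THRESHOLD (1 << 15)        /* made-up spin bound */

    struct ticketlock {
            atomic_uint head;               /* ticket being served  */
            atomic_uint tail;               /* next ticket to issue */
    };

    /* Stand-in for a hypercall that halts the vcpu until the lock
     * holder kicks us; stubbed as passive spinning here so the
     * sketch compiles standalone. */
    static void hv_wait(atomic_uint *head, unsigned int ticket)
    {
            while (atomic_load(head) != ticket)
                    ;
    }

    /* Stand-in for a hypercall that wakes the vcpu waiting on
     * 'ticket'; a no-op in this sketch. */
    static void hv_kick(struct ticketlock *lock, unsigned int ticket)
    {
            (void)lock;
            (void)ticket;
    }

    static void ticket_lock(struct ticketlock *lock)
    {
            unsigned int me = atomic_fetch_add(&lock->tail, 1);

            for (;;) {
                    /* Fast path: ordinary ticket-lock spinning. */
                    for (int i = 0; i < SPIN_THRESHOLD; i++)
                            if (atomic_load(&lock->head) == me)
                                    return;
                    /* Slow path: someone ahead of us was probably
                     * preempted; sleep in the hypervisor instead of
                     * burning the rest of our timeslice. */
                    hv_wait(&lock->head, me);
            }
    }

    static void ticket_unlock(struct ticketlock *lock)
    {
            unsigned int next = atomic_fetch_add(&lock->head, 1) + 1;

            /* Wake the next ticket holder in case it slept. */
            hv_kick(lock, next);
    }

With non-pv ticket locks the FIFO order is the problem: if the vcpu
holding the next ticket gets preempted, every later waiter spins behind
it, which is why even small overcommit hurts so much.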

