Message-ID: <4F73568D.7000703@linux.vnet.ibm.com>
Date: Wed, 28 Mar 2012 23:51:01 +0530
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To: Alan Meadows <alan.meadows@...il.com>, Avi Kivity <avi@...hat.com>
CC: "H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
the arch/x86 maintainers <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Marcelo Tosatti <mtosatti@...hat.com>,
KVM <kvm@...r.kernel.org>, Andi Kleen <andi@...stfloor.org>,
Xen Devel <xen-devel@...ts.xensource.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Virtualization <virtualization@...ts.linux-foundation.org>,
Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>,
Stephan Diestelhorst <stephan.diestelhorst@....com>,
Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
Stefano Stabellini <stefano.stabellini@...citrix.com>,
Attilio Rao <attilio.rao@...rix.com>
Subject: Re: [PATCH RFC V6 0/11] Paravirtualized ticketlocks
On 03/28/2012 09:39 PM, Alan Meadows wrote:
> I am happy to see this issue receiving some attention and second the
> wish to see these patches be considered for further review and inclusion
> in an upcoming release.
>
> Overcommit is not as common in enterprise and single-tenant virtualized
> environments as it is in multi-tenant environments, and frankly we have
> been suffering.
>
> We have been running an early copy of these patches in our lab and in a
> small production node sample set, both on 3.2.0-rc4 and 3.3.0-rc6, for over
> two weeks now with great success. With the heavy level of vCPU:pCPU
> overcommit required for our situation, the patches are increasing
> performance by an _order of magnitude_ on our E5645 and E5620 systems.
>
Thanks, Alan, for the support. I feel the timing of this patch was a
little unfortunate, though (merge window).
>
> Looks like a good baseline on which to build the KVM implementation. We
> might need some handshake to prevent interference on the host side with
> the PLE code.
>
I think I am still missing some point in Avi's comment. I agree that
PLE may be interfering with the above patches (resulting in smaller
performance advantages), but we have not seen any performance
degradation with the patches in earlier benchmarks. [ Theoretically
the patch has a slight advantage over PLE in that it at least knows
which vCPU should run next. ]
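To illustrate the point, here is a rough sketch (not the actual series
code; pv_wait()/pv_kick() are hypothetical stand-ins for the pv-ops
hooks, and SPIN_THRESHOLD is arbitrary):

#define SPIN_THRESHOLD	(1 << 15)

struct ticketlock {
	unsigned short head;	/* ticket now being served */
	unsigned short tail;	/* next ticket to be handed out */
};

/* Hypervisor-specific halt/wakeup primitives (hypercalls). */
static void pv_wait(struct ticketlock *lock, unsigned short ticket);
static void pv_kick(struct ticketlock *lock, unsigned short ticket);

void ticket_lock(struct ticketlock *lock)
{
	unsigned short me = __sync_fetch_and_add(&lock->tail, 1);

	for (;;) {
		unsigned int loops = SPIN_THRESHOLD;

		while (loops--) {
			if (ACCESS_ONCE(lock->head) == me)
				return;		/* our turn */
			cpu_relax();
		}
		/* Tell the host exactly which ticket we are waiting on. */
		pv_wait(lock, me);
	}
}

void ticket_unlock(struct ticketlock *lock)
{
	unsigned short next = ++lock->head;

	/* Directed wakeup: kick the vCPU holding the next ticket. */
	pv_kick(lock, next);
}

PLE, by contrast, only detects that some vCPU is pause-looping and has
to guess a yield candidate; the directed kick above is where the
slight advantage comes from.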
So the TODOs on my list for this are:
1. More analysis of performance on PLE machines.
2. Seeing how to implement a handshake to improve performance (if the
PLE + patch combination has a slight negative effect).
Sorry that I could not do more analysis on PLE (as promised last time)
because of machine availability. I'll do some work on this and come
back, but in the meantime I do not see it as blocking for the next
merge window.

Avi, thanks for reviewing. True, it is sort of equivalent to PLE on a
non-PLE machine.

Ingo, Peter,
Can you please let us know whether this series can be considered for
the next merge window, or whether you still have concerns that need
addressing?

I shall rebase the patches to 3.3 and resend. (The main differences
would be UNINLINE_SPIN_UNLOCK and the jump-label changes to use
static_key_true/false() instead of static_branch().)
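Roughly, the jump-label part of that rebase looks like the following
(an illustrative sketch only, reusing the ticketlock sketch above;
paravirt_ticketlocks_enabled and ticket_unlock_kick() are stand-in
names):

#include <linux/jump_label.h>

struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;

static inline void pv_ticket_unlock(struct ticketlock *lock)
{
	/* Old API: if (static_branch(&paravirt_ticketlocks_enabled)) */
	if (static_key_false(&paravirt_ticketlocks_enabled))
		ticket_unlock_kick(lock);	/* pv slowpath */
}

static_key_false() compiles to a no-op that is patched into a jump
only when the key is enabled, so bare-metal runs pay (almost) nothing
for the pv hooks.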