Date:	Thu, 17 Sep 2009 16:59:47 +0800
From:	Sheng Yang <sheng@...ux.intel.com>
To:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Cc:	Keir Fraser <keir.fraser@...citrix.com>,
	Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>,
	"xen-devel" <xen-devel@...ts.xensource.com>,
	Eddie Dong <eddie.dong@...el.com>,
	linux-kernel@...r.kernel.org, Jun Nakajima <jun.nakajima@...el.com>
Subject: Re: [Xen-devel] [RFC][PATCH 0/10] Xen Hybrid extension support

On Wednesday 16 September 2009 21:31:04 Konrad Rzeszutek Wilk wrote:
> On Wed, Sep 16, 2009 at 04:42:21PM +0800, Sheng Yang wrote:
> > Hi, Keir & Jeremy
> >
> > This patchset enables Xen Hybrid extension support.
> >
> > As we know, x86_64 PV guests have a performance problem: the guest
> > kernel and userspace reside in the same ring, so the TLB flushes
> > required when switching between guest userspace and guest kernel
> > cause overhead, and considerable extra syscall overhead is
> > introduced as well. The Hybrid Extension eliminates this overhead
> > by putting the guest kernel back in (non-root) ring 0, thereby
> > achieving better performance than a PV guest.
>
> What was the overhead? Is there a step-by-step list of operations you did
> to figure out the performance numbers?

The overhead I mentioned is: in an x86_64 PV guest, every syscall first goes 
to the hypervisor, the hypervisor then forwards it to the guest kernel, and 
finally the guest kernel returns to guest userspace. Because the hypervisor is 
involved, there is certainly overhead, and every transition results in a TLB 
flush. In a 32-bit PV guest, the guest uses #int82 to emulate syscalls, and 
the trap can specify the privilege level, so the hypervisor doesn't need to be 
involved.
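
To illustrate the direct-delivery mechanism, registering a syscall vector in 
the PV trap table looks roughly like this from the guest side (only a sketch 
of the generic trap-table interface, not code from this patchset; the entry 
stub name and vector number are placeholders):

/*
 * Sketch: a 32-bit PV guest registers its syscall vector with DPL 3 in
 * the Xen trap table. With the privilege level set to 3, userspace int
 * instructions on that vector are delivered straight to the guest
 * kernel, with no bounce through the hypervisor.
 */
#include <xen/interface/xen.h>		/* struct trap_info */
#include <asm/xen/hypercall.h>		/* HYPERVISOR_set_trap_table() */
#include <asm/segment.h>		/* __KERNEL_CS */

extern void guest_syscall_entry(void);	/* placeholder for the entry stub */

static struct trap_info syscall_trap[] = {
	/* vector, flags (low two bits = DPL), cs, handler address */
	{ 0x80, 3, __KERNEL_CS, (unsigned long)guest_syscall_entry },
	{ 0 }				/* zero-terminated table */
};

void register_direct_syscall(void)
{
	HYPERVISOR_set_trap_table(syscall_trap);
}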

And sorry, I don't have a step-by-step list for the performance tuning. All of 
the above is a known issue with x86_64 PV guests.
>
> I am asking this b/c at some point I would like to compare the pv-ops vs
> native and I am not entirely sure what is the best way to do this.

Sorry, I don't have much advice on this. If you mean tuning, what I can 
propose is just running some microbenchmarks (lmbench is a favorite of mine), 
collecting the (guest) hot functions with xenoprofile, and comparing the 
results of native and pv-ops to figure out the gap...
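
For example, a minimal lat_syscall-style loop would look something like this 
(my own sketch for illustration, not lmbench code; the iteration count and the 
choice of getppid() are arbitrary):

/*
 * Time a near-null syscall in a tight loop. On an x86_64 PV guest each
 * iteration pays the extra hypervisor bounce described above; on native
 * (or hybrid) it does not, so the difference exposes the transition cost.
 */
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
	const long iters = 1000000;
	struct timeval start, end;

	gettimeofday(&start, NULL);
	for (long i = 0; i < iters; i++)
		getppid();	/* cheap syscall, never cached in userspace */
	gettimeofday(&end, NULL);

	double usec = (end.tv_sec - start.tv_sec) * 1e6
		    + (end.tv_usec - start.tv_usec);
	printf("%.1f ns per syscall\n", usec * 1000.0 / iters);
	return 0;
}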

-- 
regards
Yang, Sheng