Message-ID: <20090916133104.GB14725@phenom.dumpdata.com>
Date: Wed, 16 Sep 2009 09:31:04 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To: Sheng Yang <sheng@...ux.intel.com>
Cc: Keir Fraser <keir.fraser@...citrix.com>,
Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>,
xen-devel <xen-devel@...ts.xensource.com>,
Eddie Dong <eddie.dong@...el.com>,
linux-kernel@...r.kernel.org, Jun Nakajima <jun.nakajima@...el.com>
Subject: Re: [Xen-devel] [RFC][PATCH 0/10] Xen Hybrid extension support
On Wed, Sep 16, 2009 at 04:42:21PM +0800, Sheng Yang wrote:
> Hi, Keir & Jeremy
>
> This patchset enabled Xen Hybrid extension support.
>
> As we know, PV guests have a performance issue on x86_64: the guest kernel
> and guest userspace reside in the same ring, so the TLB flushes needed when
> switching between guest userspace and guest kernel cause overhead, and
> considerable extra syscall overhead is introduced as well. The Hybrid
> Extension eliminates this overhead by putting the guest kernel back in
> (non-root) ring 0, thereby achieving better performance than a PV guest.
What was the overhead? Is there a step-by-step list of operations you did
to figure out the performance numbers?
I am asking this b/c at some point I would like to compare pv-ops vs. native,
and I am not entirely sure of the best way to do this.
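
For reference, one way to get a first-order number for the syscall overhead
mentioned above would be a small userspace microbenchmark that times raw
syscall round trips and to run the same binary on a native kernel, a PV guest,
and a hybrid guest. The sketch below is only illustrative (it is not from this
thread or patchset) and assumes a Linux userspace with clock_gettime():

/*
 * Illustrative sketch only: time raw syscall round trips to compare
 * native vs. PV vs. hybrid guests.  Uses syscall(SYS_getpid) so glibc
 * cannot satisfy the call from a cached value in userspace.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

#define ITERATIONS 1000000L

int main(void)
{
	struct timespec start, end;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < ITERATIONS; i++)
		syscall(SYS_getpid);	/* forces a real kernel entry each time */
	clock_gettime(CLOCK_MONOTONIC, &end);

	double ns = (end.tv_sec - start.tv_sec) * 1e9 +
		    (end.tv_nsec - start.tv_nsec);
	printf("avg getpid round trip: %.1f ns\n", ns / ITERATIONS);
	return 0;
}

Built with something like "gcc -O2 bench.c -o bench" (older glibc needs -lrt
for clock_gettime), the per-call average gives a rough measure of the ring
transition cost in each configuration, though it obviously says nothing about
TLB pressure under a real workload.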