Message-ID: <4A044786.2080508@codemonkey.ws>
Date: Fri, 08 May 2009 09:53:58 -0500
From: Anthony Liguori <anthony@...emonkey.ws>
To: Gregory Haskins <ghaskins@...ell.com>
CC: Marcelo Tosatti <mtosatti@...hat.com>,
Chris Wright <chrisw@...s-sol.org>,
Gregory Haskins <gregory.haskins@...il.com>,
Avi Kivity <avi@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Subject: Re: [RFC PATCH 0/3] generic hypercall support
Gregory Haskins wrote:
>> Greg,
>>
>> I think the comparison is not entirely fair.
>>
>
> <snip>
>
> FYI: I've updated the test/wiki to (hopefully) address your concerns.
>
> http://developer.novell.com/wiki/index.php/WhyHypercalls
>
And we're now getting close to the point where the difference is
virtually meaningless.
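(For anyone skimming the thread, the two guest-side notification paths
being compared look roughly like the sketch below.  The port number and
hypercall nr are made-up placeholders, not anything from Greg's patches.)

/* Rough sketch of the two guest-side exit triggers under discussion. */
#include <stdint.h>

static inline void notify_pio(uint16_t port, uint32_t val)
{
	/* outl traps to the host as an ordinary I/O exit, which KVM
	 * then routes to a handler by port number. */
	asm volatile("outl %0, %1" : : "a"(val), "Nd"(port));
}

static inline long notify_hypercall(unsigned long nr, unsigned long arg)
{
	long ret;
	/* vmcall traps to the host as a dedicated hypercall exit
	 * (vmmcall on AMD); nr/arg follow the usual kvm_hypercall
	 * register convention of rax/rbx. */
	asm volatile("vmcall" : "=a"(ret) : "a"(nr), "b"(arg) : "memory");
	return ret;
}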
At 0.14us per exit, in order to see 1% of CPU overhead added from PIO
vs. HC, you need about 71,429 exits per second (1% of a second is 10ms,
and 10ms / 0.14us ~= 71,429).
If you have that many exits, the sheer cost of the base vmexit overhead
is going to result in about 15% CPU overhead. To put this another way,
if your workload were entirely bound by vmexits (which is virtually
impossible), then when you were saturating your CPU at 100%, only 7% of
that would be the cost of PIO exits vs. HC.
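(Back-of-the-envelope, in case the numbers look like magic.  The ~2.1us
base vmexit cost below is not measured here; it's just the value implied
by the ~15% figure above.)

/* Quick sanity check of the figures above. */
#include <stdio.h>

int main(void)
{
	double delta_us = 0.14;	/* measured extra cost per PIO exit vs. HC */
	double base_us  = 2.1;	/* assumed base cost of any vmexit */

	/* exits/sec needed before the delta alone eats 1% of a CPU */
	double exits = 0.01 * 1e6 / delta_us;		/* ~71429 */

	/* CPU share spent on base exit cost at that exit rate */
	double base_pct = exits * base_us / 1e6 * 100;	/* ~15% */

	printf("break-even exit rate: %.0f exits/sec\n", exits);
	printf("base vmexit overhead at that rate: ~%.0f%%\n", base_pct);
	return 0;
}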
In real-life workloads, if you're paying 15% overhead just for the cost
of exits (not including the cost of heavyweight or post-exit
processing), you're toast. I think it's going to be very difficult to
construct a real scenario where you'll see a measurable (i.e. > 1%)
performance overhead from using PIO vs. HC.
And in the absence of that, I don't see the justification for adding
additional infrastructure to Linux to support this.
The non-x86 architecture argument isn't valid because other
architectures either 1) don't use PCI at all (s390) and are already
using hypercalls, 2) use PCI but do not have a dedicated hypercall
instruction (PPC emb), or 3) have PIO (ia64).
Regards,
Anthony Liguori
> Regards,
> -Greg
>
>
>