Message-ID: <4A086590.4040602@codemonkey.ws>
Date: Mon, 11 May 2009 12:51:12 -0500
From: Anthony Liguori <anthony@...emonkey.ws>
To: Avi Kivity <avi@...hat.com>
CC: Hollis Blanchard <hollisb@...ibm.com>,
Gregory Haskins <gregory.haskins@...il.com>,
Gregory Haskins <ghaskins@...ell.com>,
Chris Wright <chrisw@...s-sol.org>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [RFC PATCH 0/3] generic hypercall support
Avi Kivity wrote:
> Hollis Blanchard wrote:
>> I haven't been following this conversation at all. With that in mind...
>>
>> AFAICS, a hypercall is clearly the higher-performing option, since you
>> don't need the additional memory load (which could even cause a page
>> fault in some circumstances) and instruction decode. That said, I'm
>> willing to agree that this overhead is probably negligible compared to
>> the IOp itself... Amdahl's Law again.
>>
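For anyone not following the low-level details, the guest-side contrast
looks roughly like this. It's only a sketch: kvm_hypercall1() and
iowrite32() are the real kernel interfaces, but HC_NOTIFY_NR and the
doorbell register are made up for illustration.

#include <asm/kvm_para.h>
#include <linux/io.h>

#define HC_NOTIFY_NR	42	/* hypothetical hypercall number */

/* mmio path: the store exits to the host, which then has to walk the
 * guest page tables and decode the faulting instruction to recover
 * the target address and value. */
static void notify_mmio(void __iomem *doorbell)
{
	iowrite32(1, doorbell);
}

/* hypercall path: vmcall exits with the nr and argument already in
 * registers, so the host skips the fetch/decode step entirely. */
static void notify_hypercall(unsigned long token)
{
	kvm_hypercall1(HC_NOTIFY_NR, token);
}
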
>
> It's a question of cost vs. benefit. It's clear the benefit is low
> (but that doesn't mean it's not worth having). The cost initially
> appeared to be very low, until the nested virtualization wrench was
> thrown into the works. Not that nested virtualization is a reality yet --
> even on svm, where it is implemented, it is not yet production quality
> and is disabled by default.
>
> Now nested virtualization is beginning to look interesting, with
> Windows 7's XP mode requiring virtualization extensions. Desktop
> virtualization is also something likely to use device assignment
> (though you probably won't assign a virtio device to the XP instance
> inside Windows 7).
>
> Maybe we should revisit the mmio hypercall idea; it might be
> workable if we find a way to let the guest know whether it should use
> the hypercall for a given memory range.
>
> mmio hypercall is nice because
> - it falls back nicely to pure mmio
> - it optimizes an existing slow path, not just new device models
> - it has preexisting semantics, so we have less ABI to screw up
> - for nested virtualization + device assignment, we can drop it and
> get a nice speed win (or rather, less speed loss)
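To make the per-range fallback concrete, the guest side could look
something like the sketch below. kvm_hypercall3() exists today, but
KVM_HC_MMIO_WRITE and the discovery of use_hc (a feature bit, or a
registration hypercall per BAR) are hypothetical -- they are exactly
the pieces that would have to be defined.

#include <asm/kvm_para.h>
#include <linux/io.h>
#include <linux/types.h>

#define KVM_HC_MMIO_WRITE	43	/* hypothetical, not in the ABI today */

struct pv_mmio_region {
	phys_addr_t	gpa;	/* guest-physical base of the BAR */
	void __iomem	*va;	/* ioremap()ed mapping for the fallback */
	bool		use_hc;	/* host advertised acceleration for this range */
};

static void pv_mmio_write32(struct pv_mmio_region *r, unsigned long off, u32 val)
{
	if (r->use_hc) {
		/* (gpa, value, length) passed in registers, no decode needed */
		kvm_hypercall3(KVM_HC_MMIO_WRITE, r->gpa + off, val, 4);
		return;
	}
	/* nested or assigned case: plain store, trapped as ordinary mmio */
	iowrite32(val, r->va + off);
}
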
If it's a PCI device, then we can also have an interrupt, which we
currently lack with vmcall-based hypercalls. This would give us
guestcalls, upcalls, or whatever we've previously decided to call them.
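Very roughly, a PCI function gives the guest both directions. Sketch
only: the device, its probe routine, and the "hc-upcall" name are
invented, but request_irq() and pci_enable_device() are the standard
interfaces.

#include <linux/interrupt.h>
#include <linux/pci.h>

static irqreturn_t upcall_isr(int irq, void *dev_id)
{
	/* host -> guest notification arrives as a normal interrupt */
	return IRQ_HANDLED;
}

static int hc_dev_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int ret = pci_enable_device(pdev);
	if (ret)
		return ret;
	/* the guest -> host direction would still be vmcall or mmio */
	return request_irq(pdev->irq, upcall_isr, IRQF_SHARED,
			   "hc-upcall", pdev);
}
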
Regards,
Anthony Liguori
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/