Message-ID: <4A08661C.1000208@redhat.com>
Date: Mon, 11 May 2009 20:53:32 +0300
From: Avi Kivity <avi@...hat.com>
To: Gregory Haskins <gregory.haskins@...il.com>
CC: Hollis Blanchard <hollisb@...ibm.com>,
Anthony Liguori <anthony@...emonkey.ws>,
Gregory Haskins <ghaskins@...ell.com>,
Chris Wright <chrisw@...s-sol.org>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [RFC PATCH 0/3] generic hypercall support
Gregory Haskins wrote:
> Avi Kivity wrote:
>
>> Hollis Blanchard wrote:
>>
>>> I haven't been following this conversation at all. With that in mind...
>>>
>>> AFAICS, a hypercall is clearly the higher-performing option, since you
>>> don't need the additional memory load (which could even cause a page
>>> fault in some circumstances) and instruction decode. That said, I'm
>>> willing to agree that this overhead is probably negligible compared to
>>> the IOp itself... Amdahl's Law again.
>>>
>>>
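To make the decode-avoidance point concrete, here is a rough guest-side
sketch of the two paths. The hypercall number KVM_HC_MMIO_WRITE is invented
for illustration -- nothing like it exists upstream -- but kvm_hypercall3()
and writel() are the real accessors an x86 Linux guest would use:

#include <linux/io.h>
#include <asm/kvm_para.h>

#define KVM_HC_MMIO_WRITE	42	/* hypothetical hypercall number */

static void demo_mmio_write(void __iomem *reg, phys_addr_t gpa, u32 val)
{
	/*
	 * Plain MMIO: the store hits an unmapped guest-physical address,
	 * the CPU exits, and the host has to fetch and decode the
	 * faulting instruction to recover the value and access width.
	 */
	writel(val, reg);

	/*
	 * MMIO-over-hypercall: everything the host needs is already in
	 * registers, so no instruction decode (and no extra guest-side
	 * memory load) is required.
	 */
	kvm_hypercall3(KVM_HC_MMIO_WRITE, gpa, val, sizeof(val));
}
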
>> It's a question of cost vs. benefit. It's clear the benefit is low
>> (but that doesn't mean it's not worth having). The cost initially
>> appeared to be very low, until the nested virtualization wrench was
>> thrown into the works. Not that nested virtualization is a reality --
>> even on svm where it is implemented it is not yet production quality
>> and is disabled by default.
>>
>> Now nested virtualization is beginning to look interesting, with
>> Windows 7's XP mode requiring virtualization extensions. Desktop
>> virtualization is also something likely to use device assignment
>> (though you probably won't assign a virtio device to the XP instance
>> inside Windows 7).
>>
>> Maybe we should revisit the mmio hypercall idea; it might be
>> workable if we find a way to let the guest know whether it should
>> use the hypercall for a given memory range.
>>
>> mmio hypercall is nice because
>> - it falls back nicely to pure mmio
>> - it optimizes an existing slow path, not just new device models
>> - it has preexisting semantics, so we have less ABI to screw up
>> - for nested virtualization + device assignment, we can drop it and
>> get a nice speed win (or rather, less speed loss)
>>
>>
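To illustrate the "falls back nicely to pure mmio" point above, the
guest-side accessor could look something like the sketch below. The per-range
flag and the hypercall number are made up; how the host actually advertises
hypercall support for a range (e.g. a feature bit or a PCI capability) is
exactly the open question:

struct pv_iomem {
	void __iomem	*base;		/* ordinary ioremap()ed mapping */
	phys_addr_t	gpa;		/* guest-physical base of the range */
	bool		use_hc;		/* host advertised hypercall support? */
};

static inline void pv_writel(struct pv_iomem *m, u32 val, unsigned long off)
{
	if (m->use_hc)
		kvm_hypercall3(KVM_HC_MMIO_WRITE, m->gpa + off, val, 4);
	else
		writel(val, m->base + off);	/* pure MMIO fallback */
}

For the nested-virtualization + device-assignment case, use_hc simply stays
false and every access takes the ordinary MMIO path.
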
> Yeah, I agree with all this. I am still wrestling with how to deal with
> the device-assignment problem w.r.t. shunting I/O requests into a
> hypercall vs. letting them page-fault. Are you saying we could simply ignore
> this case by disabling "MMIOoHC" when assignment is enabled? That would
> certainly make the problem much easier to solve.
>
No, we need to deal with hotplug. Something like the IO_COND trick Chris
mentioned could work, but how do we avoid turning this into a doctoral thesis?
(On the other hand, device assignment requires an IOMMU, and I think
you have to specify that up front?)
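
For what "something like IO_COND" could mean here: lib/iomap.c already encodes
the PIO-vs-MMIO decision in the iomem cookie itself and dispatches at each
access. A similar (purely hypothetical) trick for hypercall-vs-MMIO might look
like the sketch below, so a hot-plugged assigned device whose BARs must take
real faults simply never gets the tag:

#define HC_TAG	0x1UL	/* invented: low bit tags a hypercall-capable cookie */

#define MMIO_HC_COND(addr, is_hc, is_mmio) do {			\
	if ((unsigned long __force)(addr) & HC_TAG) {		\
		is_hc;						\
	} else {						\
		is_mmio;					\
	}							\
} while (0)

static void hc_writel(u32 val, void __iomem *addr)
{
	/* cookie_to_gpa() is invented: strip the tag, return the GPA */
	MMIO_HC_COND(addr,
		     kvm_hypercall3(KVM_HC_MMIO_WRITE, cookie_to_gpa(addr), val, 4),
		     writel(val, addr));
}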
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.