Message-ID: <4A0343F5.5070509@gmail.com>
Date: Thu, 07 May 2009 16:26:29 -0400
From: Gregory Haskins <gregory.haskins@...il.com>
To: Avi Kivity <avi@...hat.com>
CC: Gregory Haskins <ghaskins@...ell.com>,
Chris Wright <chrisw@...s-sol.org>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Anthony Liguori <anthony@...emonkey.ws>
Subject: Re: [RFC PATCH 0/3] generic hypercall support

Avi Kivity wrote:
> Gregory Haskins wrote:
>>> Oh yes. But don't call it dynhc - like Chris says it's the wrong
>>> semantic.
>>>
>>> Since we want to connect it to an eventfd, call it HC_NOTIFY or
>>> HC_EVENT or something along these lines. You won't be able to pass
>>> any data, but that's fine. Registers are saved to memory anyway.
>>>
>> Ok, but how would you access the registers, since you would presumably
>> only be getting a waitq::func callback on the eventfd? Or were you
>> saying that more data, if required, is saved in a side-band memory
>> location? I can see the latter working.
>
> Yeah. You basically have that side-band in vbus shmem (or the virtio
> ring).
Ok, got it.
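
Just so we mean the same thing, the guest side of that split would look
roughly like the below: the payload travels through shared memory, and the
hypercall carries nothing but a "which notifier" id. HC_NOTIFY, RING_SIZE
and struct shm_ring are made-up names for illustration; kvm_hypercall1() is
the existing guest helper.

#define RING_SIZE 256

struct shm_ring {
	u32 prod;			/* producer index: guest writes  */
	u32 cons;			/* consumer index: host writes   */
	u64 data[RING_SIZE];		/* stand-in for real descriptors */
};

static void shm_ring_notify(struct shm_ring *ring, int notify_id, u64 cookie)
{
	ring->data[ring->prod % RING_SIZE] = cookie;
	wmb();				/* data visible before the index */
	ring->prod++;
	wmb();				/* index visible before the kick */

	/* no payload in the hypercall itself: just "go look at the ring" */
	kvm_hypercall1(HC_NOTIFY, notify_id);
}
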
>
>> I can't wrap my head around
>> the former.
>>
>
> I only meant that registers aren't faster than memory, since they are
> just another memory location.
>
> In fact registers are accessed through a function call (not that that
> takes any time these days).
>
>
>>> Just to make sure we have everything plumbed down, here's how I see
>>> things working out (using qemu and virtio, use sed to taste):
>>>
>>> 1. qemu starts up, sets up the VM
>>> 2. qemu creates virtio-net-server
>>> 3. qemu allocates six eventfds: irq, stopirq, notify (one set for tx
>>> ring, one set for rx ring)
>>> 4. qemu connects the six eventfd to the data-available,
>>> data-not-available, and kick ports of virtio-net-server
>>> 5. the guest starts up and configures virtio-net in pci pin mode
>>> 6. qemu notices and decides it will manage interrupts in user space
>>> since this is complicated (shared, level-triggered interrupts)
>>> 7. the guest OS boots, loads device driver
>>> 8. device driver switches virtio-net to msix mode
>>> 9. qemu notices, plumbs the irq fds as msix interrupts, plumbs the
>>> notify fds as notifyfd
>>> 10. look ma, no hands.
>>>
>>> Under the hood, the following takes place.
>>>
>>> kvm wires the irqfds to schedule a work item which fires the
>>> interrupt. One day the kvm developers get their act together and
>>> change it to inject the interrupt directly when the irqfd is signalled
>>> (which could be from the net softirq or somewhere similarly nasty).
>>>
>>> virtio-net-server wires notifyfd according to its liking. It may
>>> schedule a thread, or it may execute directly.
>>>
>>> And they all lived happily ever after.
>>>
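
To make sure we are all picturing the same plumbing for steps 3/4/9, the
userspace side I have in mind looks roughly like this for one queue
(stopirq fd omitted for brevity; the ioctl and struct names below are
illustrative only, since the exact eventfd ABI is still being hashed out):

#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int plumb_queue(int vm_fd, int msix_gsi, __u64 notify_addr,
		       __u16 queue_index)
{
	int irq_fd    = eventfd(0, 0);	/* host -> guest interrupt */
	int notify_fd = eventfd(0, 0);	/* guest -> host kick      */

	/*
	 * Signalling irq_fd injects the MSI-X vector; assumes msix_gsi
	 * has already been routed to the vector's MSI message.
	 */
	struct kvm_irqfd irqfd = {
		.fd  = irq_fd,
		.gsi = msix_gsi,
	};
	if (ioctl(vm_fd, KVM_IRQFD, &irqfd) < 0)
		return -1;

	/* a guest write of this queue's index to the notify port signals notify_fd */
	struct kvm_ioeventfd kick = {
		.addr      = notify_addr,	/* BAR0 + QUEUE_NOTIFY port */
		.len       = 2,			/* 16-bit queue index write */
		.datamatch = queue_index,	/* only this queue's kick   */
		.fd        = notify_fd,
		.flags     = KVM_IOEVENTFD_FLAG_PIO |
			     KVM_IOEVENTFD_FLAG_DATAMATCH,
	};
	if (ioctl(vm_fd, KVM_IOEVENTFD, &kick) < 0)
		return -1;

	/* in real code irq_fd/notify_fd now get handed to virtio-net-server */
	return 0;
}
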
>>
>> Ack. I hope when it's all said and done I can convince you that the
>> framework to code up those virtio backends in the kernel is vbus ;)
>
> If vbus doesn't bring significant performance advantages, I'll prefer
> virtio because of existing investment.
Just to clarify: vbus is just the container/framework for the in-kernel
models. You can implement and deploy virtio devices inside that
container (though I haven't had a chance to sit down and implement one
yet). Note that I did publish a virtio transport in the last few series
to demonstrate how that might work, so it's ripe for the picking if
someone is so inclined.
So really the question is whether you implement the in-kernel virtio
backend in vbus, in some other framework, or just do it standalone.
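
FWIW, whichever way that goes, the kernel side ends up with the same basic
shape: the notify eventfd's wakeup schedules some deferred work, and that
work drains the ring (or, per your note above, it may execute directly).
A rough sketch, with all names invented for illustration (ring_has_work()
and process_one() stand in for the real ring accessors, and the wait entry
is assumed to be hooked into the notify eventfd's waitqueue at setup time;
the poll-table dance is omitted):

struct net_backend {
	wait_queue_t		wait;	/* hooked into the notify eventfd's waitqueue */
	struct work_struct	work;
};

static int backend_notify_wakeup(wait_queue_t *wait, unsigned mode,
				 int sync, void *key)
{
	struct net_backend *be = container_of(wait, struct net_backend, wait);

	/* guest kicked us: punt to process context */
	schedule_work(&be->work);
	return 0;
}

static void backend_work(struct work_struct *work)
{
	struct net_backend *be = container_of(work, struct net_backend, work);

	while (ring_has_work(be))
		process_one(be);
}

The interesting differences between the frameworks are in how that object
gets created, discovered, and hooked up, which is where vbus comes in.
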
-Greg