Message-ID: <4A033F6E.3010604@novell.com>
Date: Thu, 07 May 2009 16:07:10 -0400
From: Gregory Haskins <ghaskins@...ell.com>
To: Avi Kivity <avi@...hat.com>
CC: Gregory Haskins <gregory.haskins@...il.com>,
Chris Wright <chrisw@...s-sol.org>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Anthony Liguori <anthony@...emonkey.ws>
Subject: Re: [RFC PATCH 0/3] generic hypercall support
Avi Kivity wrote:
> Gregory Haskins wrote:
>>> Don't - it's broken. It will also catch device assignment mmio and
>>> hypercall them.
>>>
>>>
>> Ah. Crap.
>>
>> Would you be amenable to my continuing with the dynhc() approach,
>> then?
>>
>
> Oh yes. But don't call it dynhc - like Chris says it's the wrong
> semantic.
>
> Since we want to connect it to an eventfd, call it HC_NOTIFY or
> HC_EVENT or something along these lines. You won't be able to pass
> any data, but that's fine. Registers are saved to memory anyway.
Ok, but how would you access the registers, since presumably you would
only get a waitq::func callback on the eventfd? Or were you saying
that additional data, if required, is saved in a side-band memory
location? I can see the latter working; I can't wrap my head around
the former.
>
> And btw, given that eventfd and the underlying infrastructure are so
> flexible, it's probably better to go back to your original "irqfd gets
> fd from userspace" just to be consistent everywhere.
>
> (no, I'm not deliberately making you rewrite that patch again and
> again... it's going to be a key piece of infrastructure so I want to
> get it right)
Ok, np. Actually, now that Davide has shown me the waitq::func trick,
the fd technically doesn't even need to be an eventfd per se. We can
just plain-old fget() it and attach via f_ops->poll(), as I do in v5.
I'll submit this later today.
>
>
> Just to make sure we have everything plumbed down, here's how I see
> things working out (using qemu and virtio, use sed to taste):
>
> 1. qemu starts up, sets up the VM
> 2. qemu creates virtio-net-server
> 3. qemu allocates six eventfds: irq, stopirq, notify (one set for tx
> ring, one set for rx ring)
> 4. qemu connects the six eventfd to the data-available,
> data-not-available, and kick ports of virtio-net-server
> 5. the guest starts up and configures virtio-net in pci pin mode
> 6. qemu notices and decides it will manage interrupts in user space
> since this is complicated (shared level triggered interrupts)
> 7. the guest OS boots, loads device driver
> 8. device driver switches virtio-net to msix mode
> 9. qemu notices, plumbs the irq fds as msix interrupts, plumbs the
> notify fds as notifyfd
> 10. look ma, no hands.
>
> Under the hood, the following takes place.
>
> kvm wires the irqfds to schedule a work item which fires the
> interrupt. One day the kvm developers get their act together and
> change it to inject the interrupt directly when the irqfd is signalled
> (which could be from the net softirq or somewhere similarly nasty).
>
> virtio-net-server wires notifyfd according to its liking. It may
> schedule a thread, or it may execute directly.
>
> And they all lived happily ever after.
Ack. I hope that when it's all said and done I can convince you that
the framework for coding up those virtio backends in the kernel is
vbus ;) But even if not, this should provide enough plumbing that we
can all coexist peacefully.
Thanks,
-Greg