Message-ID: <4B3254B4.2080602@gmail.com>
Date: Wed, 23 Dec 2009 12:34:44 -0500
From: Gregory Haskins <gregory.haskins@...il.com>
To: Kyle Moffett <kyle@...fetthome.net>
CC: Ingo Molnar <mingo@...e.hu>, Avi Kivity <avi@...hat.com>,
kvm@...r.kernel.org, Andrew Morton <akpm@...ux-foundation.org>,
torvalds@...ux-foundation.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
netdev@...r.kernel.org,
"alacrityvm-devel@...ts.sourceforge.net"
<alacrityvm-devel@...ts.sourceforge.net>,
"Ira W. Snyder" <iws@...o.caltech.edu>
Subject: Re: [GIT PULL] AlacrityVM guest drivers for 2.6.33

On 12/23/09 1:15 AM, Kyle Moffett wrote:
> On Tue, Dec 22, 2009 at 12:36, Gregory Haskins
> <gregory.haskins@...il.com> wrote:
>> On 12/22/09 2:57 AM, Ingo Molnar wrote:
>>> * Gregory Haskins <gregory.haskins@...il.com> wrote:
>>>> Actually, these patches have nothing to do with the KVM folks. [...]
>>>
>>> That claim is curious to me - the AlacrityVM host
>>
>> It's quite simple, really. These drivers support accessing vbus, and
>> vbus is hypervisor agnostic. In fact, vbus isn't necessarily even
>> hypervisor related. It may be used anywhere a Linux kernel is the
>> "io backend", which includes hypervisors like AlacrityVM, but also
>> userspace apps and interconnected physical systems.
>>
>> The vbus core on the backend and the drivers on the frontend operate
>> completely independently of the underlying hypervisor. A glue piece
>> called a "connector" ties them together, and any hypervisor-specific
>> details are encapsulated in the connector module. In this case, the
>> connector surfaces to the guest side as a pci-bridge, so even that is
>> not hypervisor specific per se. It will work with any pci-bridge that
>> exposes a compatible ABI, which could conceivably be actual hardware.
>
> This is actually something that is of particular interest to me. I
> have a few prototype boards right now with programmable PCI-E
> host/device links on them; one of my long-term plans is to finagle
> vbus into providing multiple "virtual" devices across that single
> PCI-E interface.
>
> Specifically, I want to be able to provide virtual NIC(s), serial
> ports and serial consoles, virtual block storage, and possibly other
> kinds of interfaces. My big problem with existing virtio right now
> (although I would be happy to be proven wrong) is that it seems to
> need some sort of out-of-band communication channel for setting up
> devices, not to mention it seems to need one PCI device per virtual
> device.
>
> So I would love to be able to port something like vbus to my nifty PCI
> hardware and write some backend drivers... then my PCI-E connected
> systems would dynamically provide a list of highly efficient "virtual"
> devices to each other, with only one 4-lane PCI-E bus.
Hi Kyle,
We do indeed have others doing something similar. I have CC'd Ira, who
may be able to provide you with more details. I would also point you at
the canonical example of what you would need to write to tie your
systems together. It's the "null connector", which you can find here:
http://git.kernel.org/?p=linux/kernel/git/ghaskins/alacrityvm/linux-2.6.git;a=blob;f=kernel/vbus/connectors/null.c;h=b6d16cb68b7e49e07528278bc9f5b73e1dac0c2f;hb=HEAD
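To give a rough feel for the shape of a connector before you dive into
that file, here is a heavily simplified, illustrative skeleton. The
struct and function names are invented for this sketch (they are not
the real vbus API), and the actual registration call into the vbus core
is deliberately omitted, so please treat null.c above as the
authoritative reference:

/*
 * Illustrative sketch only: the names below are placeholders, NOT the
 * real vbus interface.  See kernel/vbus/connectors/null.c for the
 * actual code.
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/types.h>

/*
 * Conceptually, a connector supplies the transport-specific hooks that
 * carry device add/drop events and shared-memory "kick" signals between
 * the vbus backend and whatever sits on the other side (a guest, a
 * userspace app, or a PCI-E peer such as your boards).
 */
struct example_connector_ops {
	int  (*devadd)(u64 devid, const char *type); /* announce a device */
	void (*devdrop)(u64 devid);                  /* retract a device  */
	void (*signal)(u64 devid, u32 shmid);        /* kick the far side */
};

static int example_devadd(u64 devid, const char *type)
{
	pr_info("example-connector: add device %llu (%s)\n",
		(unsigned long long)devid, type);
	return 0;
}

static void example_devdrop(u64 devid)
{
	pr_info("example-connector: drop device %llu\n",
		(unsigned long long)devid);
}

static void example_signal(u64 devid, u32 shmid)
{
	/*
	 * A PCI-E connector would translate the kick into an MSI or a
	 * doorbell write across the link here; the null connector just
	 * loops it back locally.
	 */
}

static struct example_connector_ops example_ops = {
	.devadd  = example_devadd,
	.devdrop = example_devdrop,
	.signal  = example_signal,
};

static int __init example_connector_init(void)
{
	/*
	 * A real connector registers its ops with the vbus core here;
	 * that call is omitted because its name and signature belong to
	 * the vbus tree, not to this sketch.
	 */
	pr_info("example-connector: loaded (ops at %p)\n", &example_ops);
	return 0;
}

static void __exit example_connector_exit(void)
{
	pr_info("example-connector: unloaded\n");
}

module_init(example_connector_init);
module_exit(example_connector_exit);
MODULE_LICENSE("GPL");

For your PCI-E boards, essentially all of the interesting work would
live behind the signal/devadd hooks, where the events have to cross the
physical link instead of staying on the local host.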
Do not hesitate to ask any questions, though you may want to take the
conversation to the alacrityvm-devel list so as not to annoy the current
CC list any further than I already have ;)
Kind Regards,
-Greg