Message-ID: <20091223195413.GB30700@ovro.caltech.edu>
Date: Wed, 23 Dec 2009 11:54:13 -0800
From: "Ira W. Snyder" <iws@...o.caltech.edu>
To: Anthony Liguori <anthony@...emonkey.ws>
Cc: Kyle Moffett <kyle@...fetthome.net>,
Gregory Haskins <gregory.haskins@...il.com>,
kvm@...r.kernel.org, netdev@...r.kernel.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"alacrityvm-devel@...ts.sourceforge.net"
<alacrityvm-devel@...ts.sourceforge.net>,
Avi Kivity <avi@...hat.com>, Ingo Molnar <mingo@...e.hu>,
torvalds@...ux-foundation.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [Alacrityvm-devel] [GIT PULL] AlacrityVM guest drivers for
2.6.33
On Wed, Dec 23, 2009 at 09:09:21AM -0600, Anthony Liguori wrote:
> On 12/23/2009 12:15 AM, Kyle Moffett wrote:
> > This is actually something that is of particular interest to me. I
> > have a few prototype boards right now with programmable PCI-E
> > host/device links on them; one of my long-term plans is to finagle
> > vbus into providing multiple "virtual" devices across that single
> > PCI-E interface.
> >
> > Specifically, I want to be able to provide virtual NIC(s), serial
> > ports and serial consoles, virtual block storage, and possibly other
> > kinds of interfaces. My big problem with existing virtio right now
> > (although I would be happy to be proven wrong) is that it seems to
> > need some sort of out-of-band communication channel for setting up
> > devices, not to mention it seems to need one PCI device per virtual
> > device.
>
> We've been thinking about doing a virtio-over-IP mechanism such that you
> could remote the entire virtio bus to a separate physical machine.
> virtio-over-IB is probably more interesting since you can make use of
> RDMA. virtio-over-PCI-e would work just as well.
>
I didn't know you were interested in this as well. See my later reply to
Kyle for a lot of code that I've written with this in mind.
> virtio is a layered architecture. Device enumeration/discovery sits at
> a lower level than the actual device ABIs. The device ABIs are
> implemented on top of a bulk data transfer API. The reason for this
> layering is so that we can reuse PCI as an enumeration/discovery
> mechanism. This tremendously simplifies porting drivers to other OSes
> and lets us use PCI hotplug automatically. We get integration into all
> the fancy userspace hotplug support for free.
>
> But both virtio-lguest and virtio-s390 use in-band enumeration and
> discovery since they do not have support for PCI on either platform.
>
I'm interested in the same thing, just over PCI. The only PCI agent
systems I've used are not capable of manipulating the PCI configuration
space in such a way that virtio-pci is usable on them. This means
creating your own enumeration mechanism, which sucks. See my virtio-phys
code (http://www.mmarray.org/~iws/virtio-phys/) for an example of how I
did it. It was modeled on lguest. Help is appreciated.
Ira
--