Date:	Wed, 23 Dec 2009 15:42:56 -0800
From:	"Ira W. Snyder" <iws@...o.caltech.edu>
To:	Anthony Liguori <anthony@...emonkey.ws>
Cc:	Kyle Moffett <kyle@...fetthome.net>,
	Gregory Haskins <gregory.haskins@...il.com>,
	kvm@...r.kernel.org, netdev@...r.kernel.org,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"alacrityvm-devel@...ts.sourceforge.net" 
	<alacrityvm-devel@...ts.sourceforge.net>,
	Avi Kivity <avi@...hat.com>, Ingo Molnar <mingo@...e.hu>,
	torvalds@...ux-foundation.org,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [Alacrityvm-devel] [GIT PULL] AlacrityVM guest drivers for
 2.6.33

On Wed, Dec 23, 2009 at 04:58:37PM -0600, Anthony Liguori wrote:
> On 12/23/2009 01:54 PM, Ira W. Snyder wrote:
> > On Wed, Dec 23, 2009 at 09:09:21AM -0600, Anthony Liguori wrote:
> 
> > I didn't know you were interested in this as well. See my later reply to
> > Kyle for a lot of code that I've written with this in mind.
> 
> 
> BTW, in the future, please CC me or CC 
> virtualization@...ts.linux-foundation.org.  Or certainly kvm@...r.  I 
> never looked at the virtio-over-pci patchset although I've heard it 
> referenced before.
> 

Will do. I wouldn't have thought kvm@...r would be on-topic. I'm not
interested in KVM itself (though I use it constantly, and it is great); I'm
only interested in using virtio as a transport between physical systems. Is
that list a place where discussing virtio by itself is on-topic?

> >> But both virtio-lguest and virtio-s390 use in-band enumeration and
> >> discovery since they do not have support for PCI on either platform.
> >>
> >
> > I'm interested in the same thing, just over PCI. The only PCI agent
> > systems I've used are not capable of manipulating the PCI configuration
> > space in such a way that virtio-pci is usable on them.
> 
> virtio-pci is the wrong place to start if you want to use a PCI *device* 
> as the virtio bus. virtio-pci is meant to use the PCI bus as the virtio 
> bus.  That's a very important requirement for us because it maintains 
> the relationship of each device looking like a normal PCI device.
> 
> > This means
> > creating your own enumeration mechanism. Which sucks.
> 
> I don't think it sucks.  The idea is that we don't want to unnecessarily 
> reinvent things.
> 
> Of course, the key feature of virtio is that it makes it possible for 
> you to create your own enumeration mechanism if you're so inclined.
> 
> > See my virtio-phys
> > code (http://www.mmarray.org/~iws/virtio-phys/) for an example of how I
> > did it. It was modeled on lguest. Help is appreciated.
> 
> If it were me, I'd take a much different approach.  I would use a very 
> simple device with a single transmit and receive queue.  I'd create a 
> standard header, and then implement a command protocol on top of it. 
> You'll be able to support zero copy I/O (although you'll have a fixed 
> number of outstanding requests).  You would need a single large ring.
> 
> But then again, I have no idea what your requirements are.  You could 
> probably get far treating the thing as a network device and just doing 
> ATAoE or something like that.
> 

I've got a single PCI Host (master) with ~20 PCI slots. Physically, it
is a backplane in a cPCI chassis, but the form factor is irrelevant. It
is regular PCI from a software perspective.

Into this backplane, I plug up to 20 PCI Agents (slaves). They are
full-featured powerpc computers (CPU, RAM, etc.), almost identical to the
Freescale MPC8349EMDS board, and they can run standalone.

I want to use the PCI backplane as a data transport. Specifically, I want
to transport Ethernet over the backplane, so I can have the powerpc boards
mount their rootfs via NFS, etc. Everyone knows how to write network
daemons; it is a good, very well-known way to transport data between
systems.

On the PCI bus, the powerpc systems expose three PCI BARs. Their sizes are
configurable, as are the local memory addresses they map. What I cannot do
is get notified when a read or write hits a BAR. There is a feature on the
board which allows me to generate interrupts in either direction:
agent->master (PCI INTX) and master->agent (via an MMIO register). The PCI
vendor ID and device ID are not configurable.
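
To make the two "kick" directions concrete, here is roughly what they look
like in code. This is an illustrative sketch only: the register names and
offsets below are invented, and the real layout is board-specific.

#include <linux/io.h>

/* Hypothetical register offsets, for illustration only. */
#define AGENT_DOORBELL_REG	0x28	/* in the agent's BAR0 */
#define LOCAL_INTX_ASSERT_REG	0x2c	/* in the agent's local registers */

/*
 * Master side: kick an agent by writing the doorbell register that the
 * agent exposes through one of its BARs.  The write raises an interrupt
 * on the agent.
 */
static void master_kick_agent(void __iomem *agent_bar0)
{
	iowrite32(1, agent_bar0 + AGENT_DOORBELL_REG);
}

/*
 * Agent side: signal the master over the normal PCI INTX line by poking
 * a local control register.
 */
static void agent_kick_master(void __iomem *local_regs)
{
	iowrite32(1, local_regs + LOCAL_INTX_ASSERT_REG);
}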

One thing I cannot assume is that the PCI master system is capable of
performing DMA. In my system, it is a Pentium III-class x86 machine, which
has no DMA engine. However, the powerpc systems do have DMA engines. In
virtio terms, it was suggested to make the powerpc systems the "virtio
hosts" (running the backends) and make the x86 (PCI master) the "virtio
guest" (running virtio-net, etc.).

I'm not sure what you're suggesting in the paragraph above. I want to use
virtio-net as the transport; I do not want to write my own virtual-network
driver. Can you please clarify?
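
To show where my confusion is, here is my best guess at the "standard
header + command protocol" idea: one transmit and one receive queue, with
every buffer prefixed by a fixed header describing the payload. All of the
names below are invented for the sake of the example; this is a guess, not
working code.

#include <linux/types.h>

/* Every buffer on the tx/rx rings starts with this header. */
struct phys_cmd_hdr {
	__le32 type;	/* one of PHYS_CMD_* below */
	__le32 len;	/* payload length in bytes */
	__le64 tag;	/* matches a request to its completion */
};

enum {
	PHYS_CMD_NET_TX = 1,	 /* payload is an ethernet frame to send */
	PHYS_CMD_NET_RX = 2,	 /* payload is a received ethernet frame */
	PHYS_CMD_COMPLETION = 3, /* no payload; tag names the finished request */
};

Is that roughly what you meant?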

Hopefully that explains what I'm trying to do. I'd love someone to help
guide me in the right direction here. I want something to fill this need in
mainline. I've been contacted separately by 10+ people looking for a
similar solution. I suspect most of them ended up doing what I did: writing
a quick-and-dirty network driver. Just to give an idea of the timescale,
I've been working on this for a year.

PS - should I create a new thread on the two mailing lists mentioned
above? I don't want to go too far off-topic in an AlacrityVM thread. :)

Ira