Date:	Wed, 4 Aug 2010 16:04:41 -0700
From:	"Ira W. Snyder" <iws@...o.caltech.edu>
To:	"Michael S. Tsirkin" <mst@...hat.com>,
	Rusty Russell <rusty@...tcorp.com.au>
Cc:	virtualization@...ts.linux-foundation.org,
	Zang Roy <r61911@...escale.com>, netdev@...r.kernel.org
Subject: Using virtio as a physical (wire-level) transport

Hello Michael, Rusty,

I'm trying to figure out how to use virtio-net and vhost-net to
communicate over a physical transport (a PCI bus) instead of over shared
memory (as in, for example, a qemu/kvm guest).

We've talked about this several times in the past, and I currently have
some time to devote to it again. I'm trying to figure out whether virtio
is still a viable solution, or whether it has evolved to the point where
it is no longer usable for this application.

I am trying to create a generic system that allows the type of
communication described below. I would like to create something that can
be easily ported to any slave computer that meets the following
requirements (a rough sketch of these pieces follows the list):

1) it is a PCI slave (agent) (it acts like any other PCI card)
2) it has an inter-processor communications mechanism
3) it has a DMA engine
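
To make those requirements concrete, here is a rough sketch of the
slave-side abstraction I have in mind. Everything below is hypothetical
(the structure and function names are my own, not an existing kernel
API); it just maps each requirement onto an operation the per-board code
would have to supply:

#include <linux/types.h>        /* u64, size_t, dma_addr_t */
#include <linux/io.h>           /* void __iomem * */

struct slave_dev;               /* hypothetical per-board device */

/* Hypothetical slave-side hooks; names are illustrative only. */
struct slave_transport_ops {
        /* 1) PCI slave: map a window of master memory over the bus */
        void __iomem *(*map_master)(struct slave_dev *dev,
                                    u64 bus_addr, size_t len);

        /* 2) inter-processor communications: interrupt the peer */
        void (*notify_master)(struct slave_dev *dev, unsigned int queue);

        /* 3) DMA engine: asynchronous copy across the PCI bus */
        int (*dma_copy)(struct slave_dev *dev, dma_addr_t dst, u64 src,
                        size_t len, void (*done)(void *ctx), void *ctx);
};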

There is a reasonable amount of demand for such a system. I get
inquiries about the prototype code I posted to linux-netdev at least
once a month. This sort of system is used regularly in the
telecommunications industry, among others.

Here is a quick drawing of the system I work with. Please forgive my
poor ASCII art skills.

+-----------------+
| master computer |
|                 |                             +-------------------+
| PCI slot #1     | <-- physical connection --> | slave computer #1 |
| virtio-net if#1 |                             | vhost-net if#1    |
|                 |                             +-------------------+
|                 |
|                 |                             +-------------------+
| PCI slot #2     | <-- physical connection --> | slave computer #2 |
| virtio-net if#2 |                             | vhost-net if#2    |
|                 |                             +-------------------+
|                 |
|                 |                             +-------------------+
| PCI slot #n     | <-- physical connection --> | slave computer #n |
| virtio-net if#n |                             | vhost-net if#n    |
|                 |                             +-------------------+
+-----------------+

The reason for using vhost-net on the "slave" side is that vhost-net is
the component that performs the data copies. In most cases the slave
computers are non-x86 and have DMA controllers; DMA is an absolute
necessity when copying data across the PCI bus.
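
To illustrate the copy path I have in mind, here is a rough sketch of
the slave walking the avail ring and feeding its DMA engine. The ring
layout is the standard vring from <linux/virtio_ring.h>; everything
else (slave_dma_memcpy(), how the rings get mapped across the bus, the
missing memory barriers, and descriptor chaining) is hand-waved and
purely illustrative:

#include <linux/types.h>
#include <linux/virtio_ring.h>  /* struct vring, vring_desc, vring_avail */

/* Hypothetical: program the board's DMA engine to copy len bytes from
 * a master bus address into local memory and wait for completion. */
int slave_dma_memcpy(dma_addr_t local_dst, u64 master_bus_addr, u32 len);

static void slave_drain_avail_ring(struct vring *vr, u16 *last_avail,
                                   dma_addr_t rx_buf)
{
        while (*last_avail != vr->avail->idx) {
                u16 head = vr->avail->ring[*last_avail % vr->num];
                struct vring_desc *d = &vr->desc[head];

                /* The actual data copy crosses the PCI bus via DMA,
                 * never via the CPU. */
                slave_dma_memcpy(rx_buf, d->addr, d->len);

                /* Mark the buffer as consumed for the master. */
                vr->used->ring[vr->used->idx % vr->num].id  = head;
                vr->used->ring[vr->used->idx % vr->num].len = d->len;
                vr->used->idx++;

                (*last_avail)++;
        }
}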

Do you think virtio is a viable solution to solve this problem? If not,
can you suggest anything else?

Another reason I ask is that I previously invested several months in
implementing a similar solution, only to have it rejected outright for
"not being the right way". If you don't think something like this has
any hope, I'd rather not waste another month of my life. If you can
think of a solution that is likely to be "the right way", I'd rather
hear about it before I implement any code.

Making my life harder since the last time I tried this, mainline commit
7c5e9ed0c ("virtio_ring: remove a level of indirection") removed the
possibility of plugging in an alternative virtqueue implementation. The
commit message suggests that you might be willing to add this capability
back. Would that be an option?
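
For reference, what I'm after is roughly the hook that existed before
that commit: a set of virtqueue operations that a transport could
override instead of being tied to the vring-in-guest-memory code.
Something along these lines (written from memory, so treat the member
list and signatures as approximate rather than a quote of the old
header):

#include <linux/virtio.h>       /* struct virtqueue */
#include <linux/scatterlist.h>

/* Approximate shape of the indirection removed by 7c5e9ed0c. */
struct virtqueue_ops {
        int (*add_buf)(struct virtqueue *vq, struct scatterlist sg[],
                       unsigned int out_num, unsigned int in_num,
                       void *data);
        void *(*get_buf)(struct virtqueue *vq, unsigned int *len);
        void (*kick)(struct virtqueue *vq);
        void (*disable_cb)(struct virtqueue *vq);
        bool (*enable_cb)(struct virtqueue *vq);
};

With that kind of hook, a PCI-based transport could supply add_buf/kick
implementations that drive the slave's doorbell and DMA engine instead
of the shared-memory vring code.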

Thanks for your time,
Ira