Message-ID: <20090806232335.GC20758@ovro.caltech.edu>
Date:	Thu, 6 Aug 2009 16:23:36 -0700
From:	"Ira W. Snyder" <iws@...o.caltech.edu>
To:	Gregory Haskins <ghaskins@...ell.com>
Cc:	Arnd Bergmann <arnd@...db.de>,
	alacrityvm-devel@...ts.sourceforge.net,
	Avi Kivity <avi@...hat.com>,
	"Michael S. Tsirkin" <mst@...hat.com>, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH 0/7] AlacrityVM guest drivers

On Thu, Aug 06, 2009 at 10:29:08AM -0600, Gregory Haskins wrote:
> >>> On 8/6/2009 at 11:40 AM, in message <200908061740.04276.arnd@...db.de>, Arnd
> Bergmann <arnd@...db.de> wrote: 
> > On Thursday 06 August 2009, Gregory Haskins wrote:

[ big snip ]

> > 
> > 3. The ioq method seems to be the real core of your work that makes
> > venet perform better than virtio-net with its virtqueues. I don't see
> > any reason to doubt that your claim is correct. My conclusion from
> > this would be to add support for ioq to virtio devices, alongside
> > virtqueues, but to leave out the extra bus_type and probing method.
> 
> While I appreciate the sentiment, I doubt that is actually what's helping here.
> 
> There are a variety of factors that I poured into venet/vbus that I think contribute to its superior performance.  However, the difference in the ring design is not, I think, one of them.  In fact, in many ways I think Rusty's design might turn out to be faster if put side by side, because he was much more careful with cacheline alignment than I was.  Also note that I was careful not to pick one ring vs the other ;)  They both should work.
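
For anyone following along: the cacheline point is about keeping the
producer-owned and consumer-owned indices of the ring on separate
cachelines, so the two sides don't false-share. A rough userspace
sketch of the idea (names are made up; this is neither the ioq nor
the vring layout):

/*
 * Illustration only: a shared-memory ring where producer state and
 * consumer state sit on separate cachelines, so the two sides never
 * bounce the same line back and forth.
 */
#include <stdint.h>

#define CACHELINE	64
#define RING_ENTRIES	256			/* power of two */

struct ring_slot {
	uint64_t addr;				/* buffer address */
	uint32_t len;
	uint32_t flags;
};

struct shm_ring {
	/* written by the producer, read by the consumer */
	volatile uint32_t prod_idx __attribute__((aligned(CACHELINE)));

	/* written by the consumer, read by the producer */
	volatile uint32_t cons_idx __attribute__((aligned(CACHELINE)));

	/* descriptors start on their own cacheline */
	struct ring_slot slots[RING_ENTRIES]
			__attribute__((aligned(CACHELINE)));
};

/* free-running indices: full when the producer is a whole ring ahead */
static inline int ring_full(const struct shm_ring *r)
{
	return r->prod_idx - r->cons_idx == RING_ENTRIES;
}

With the two indices packed next to each other instead, every update
by one side invalidates the very line the other side is polling.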

IMO, the virtio vring design is very well thought out. I found it
relatively easy to port to a host+blade setup, and run virtio-net over a
physical PCI bus, connecting two physical CPUs.
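
Part of what makes that practical is that the whole ring is one
contiguous block: descriptor table, then the avail ring, then the used
ring after padding up to the alignment, so it can live in any memory
window both ends can map. Roughly, as I understand the layout
(illustration only, not the kernel's definitions):

/*
 * Sketch of the split vring footprint: one contiguous region holds
 * the descriptor table, the avail ring, and (after padding to
 * 'align') the used ring.  Under KVM that region is guest RAM; in a
 * host+blade setup it can just as well be a window reachable across
 * the physical PCI bus.
 */
#include <stdint.h>
#include <stdio.h>

struct vring_desc {
	uint64_t addr;
	uint32_t len;
	uint16_t flags;
	uint16_t next;
};

struct vring_avail {
	uint16_t flags;
	uint16_t idx;
	uint16_t ring[];
};

struct vring_used_elem {
	uint32_t id;
	uint32_t len;
};

struct vring_used {
	uint16_t flags;
	uint16_t idx;
	struct vring_used_elem ring[];
};

static size_t ring_bytes(unsigned int num, unsigned long align)
{
	size_t sz;

	sz  = num * sizeof(struct vring_desc);			/* descriptors */
	sz += sizeof(struct vring_avail) + num * sizeof(uint16_t); /* avail   */
	sz  = (sz + align - 1) & ~(align - 1);			/* pad         */
	sz += sizeof(struct vring_used) + num * sizeof(struct vring_used_elem);
	return sz;
}

int main(void)
{
	/* 256 descriptors, used ring page-aligned, virtio-pci style */
	printf("%zu bytes of shared memory\n", ring_bytes(256, 4096));
	return 0;
}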

> 
> IMO, we are only looking at the tip of the iceberg when looking at this purely as the difference between virtio-pci vs virtio-vbus, or venet vs virtio-net.
> 
> Really, the big thing I am working on here is the host-side device model.  The idea here was to design a bus model conducive to high-performance, software-to-software IO that would work in a variety of environments (that may or may not have PCI).  KVM is one such environment, but I also have people looking at building other types of containers, and even physical systems (host+blade kind of setups).
> 
> The idea is that the "connector" is modular, and then something like virtio-net or venet "just works": in kvm, in the userspace container, on the blade system.
> 
> It provides a management infrastructure that (hopefully) makes sense for these different types of containers, regardless of whether they have PCI, QEMU, etc. (i.e. things that are inherent to KVM, but not to the others).
> 
> I hope this helps to clarify the project :)
> 

I think this is the major benefit of vbus. I've only started studying
the vbus code, so I don't have lots to say yet. The overview of the
management interface makes it look pretty good.
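
The "connector" part, as I read it, is essentially an ops table: each
environment (KVM, a userspace container, a host+blade pair over PCI)
plugs in its own shared-memory and signaling hooks, and the device
models above it stay the same. A purely hypothetical sketch; the names
and signatures here are invented, not the actual vbus API:

/*
 * Hypothetical sketch of a pluggable "connector".  The device model
 * only ever sees these hooks; each transport supplies its own
 * implementation.  Invented for illustration, not the vbus interface.
 */
#include <stddef.h>

struct connector;

struct connector_ops {
	/* map a region of memory visible to both sides */
	void *(*map_shm)(struct connector *c, size_t len);

	/*
	 * kick the remote side: a hypercall under KVM, an eventfd for
	 * a userspace container, a doorbell/MSI write across PCI
	 */
	int (*signal)(struct connector *c);

	/* tear the link down */
	void (*release)(struct connector *c);
};

struct connector {
	const struct connector_ops *ops;
	void *priv;			/* transport-specific state */
};

/* a device model written against this never knows the transport */
static inline int device_notify(struct connector *c)
{
	return c->ops->signal(c);
}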

Getting two virtio-net drivers hooked together in my virtio-over-PCI
patches was nasty. If you read the thread that followed, you'll see
that the lack of a management interface was one of my concerns. It was
basically decided that it could come "later". The configfs interface
vbus provides is pretty nice, IMO.
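
For concreteness, this is what a configfs-style interface buys you
from userspace: devices get created with mkdir and configured by
writing attribute files, which plain scripts and management tools can
drive directly. The paths below are made up for illustration, not the
actual vbus layout:

/*
 * Userspace view of a configfs-style management interface: create a
 * device with mkdir(), configure it by writing attribute files.
 * The paths are invented for illustration only.
 */
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

static int write_attr(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return -1;
	if (write(fd, val, strlen(val)) < 0) {
		close(fd);
		return -1;
	}
	return close(fd);
}

int main(void)
{
	/* hypothetical paths, for illustration only */
	if (mkdir("/sys/kernel/config/vbus/devices/venet0", 0755) < 0)
		perror("mkdir");

	if (write_attr("/sys/kernel/config/vbus/devices/venet0/enabled", "1") < 0)
		perror("write_attr");

	return 0;
}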

Just my two cents,
Ira
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
