Date:	Fri, 07 Aug 2009 11:05:44 -0400
From:	Gregory Haskins <gregory.haskins@...il.com>
To:	Anthony Liguori <anthony@...emonkey.ws>
CC:	"Michael S. Tsirkin" <mst@...hat.com>,
	Gregory Haskins <ghaskins@...ell.com>,
	linux-kernel@...r.kernel.org,
	alacrityvm-devel@...ts.sourceforge.net, netdev@...r.kernel.org,
	kvm@...r.kernel.org
Subject: Re: [PATCH 0/7] AlacrityVM guest drivers

Anthony Liguori wrote:
> Michael S. Tsirkin wrote:
>>
>>> This series includes the basic plumbing, as well as the driver for
>>> accelerated 802.x (ethernet) networking.
>>>     
>>
>> The graphs comparing virtio with vbus look interesting.
>>   
> 
> 1gbit throughput on a 10gbit link?  I have a hard time believing that.
> 
> I've seen much higher myself.  Can you describe your test setup in more
> detail?

Sure,

For those graphs, two 8-core x86_64 boxes with Chelsio T3 10GE NICs were
connected back to back via crossover cable with an MTU of 1500.  The
kernel version was as posted.  The qemu version was generally something
very close to qemu-kvm.git HEAD at the time the data was gathered, but
unfortunately I don't seem to have logged this info.

For KVM, we take one of those boxes and run a bridge+tap configuration
on top of it.  We always run the server on the bare-metal machine on
the remote side of the link, regardless of whether we run the client in
a VM or on bare metal.
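
For reference, the host side is the standard bridge+tap recipe, roughly
like the sketch below (interface names and addresses are placeholders,
not the actual scripts):

  # create the bridge and enslave the physical 10GE port
  brctl addbr br0
  brctl addif br0 eth0
  ifconfig eth0 0.0.0.0 up
  ifconfig br0 192.168.1.1 netmask 255.255.255.0 up

  # qemu then attaches its tap device to the same bridge via the
  # usual qemu-ifup script, which does something like:
  #   ifconfig $1 0.0.0.0 up && brctl addif br0 $1
  qemu-system-x86_64 ... -net nic,model=virtio -net tap,script=/etc/qemu-ifup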

For guests, virtio-net and venet connect to the same Linux bridge
instance; I just "ifdown eth0 / ifup eth1" (or vice versa) and repeat
the same test.  I do this multiple times (usually about 10) and average
the results.  I use several different programs, such as netperf, rsync,
and ping, to take measurements.
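
Concretely, a typical throughput pass looks something like this sketch
(the remote address is a placeholder; netserver runs on the bare-metal
box at the far end of the link):

  # remote bare-metal side
  netserver

  # client side (VM or bare metal): 10 runs, averaged
  for i in $(seq 1 10); do
      netperf -H 192.168.1.2 -t TCP_STREAM -l 60 | tail -1 | awk '{print $5}'
  done | awk '{sum += $1} END {print "avg:", sum/NR, "Mbit/s"}'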

That said, note that the graphs were from runs on earlier kernels
(2.6.28 and 2.6.29-rc8).  The most recent data I can find that I
published is for 2.6.29, announced with the vbus-v3 release back in
April:

http://lkml.org/lkml/2009/4/21/408

In it, the virtio-net throughput numbers are substantially higher and
possibly more in line with your expectations (4.5 Gb/s), though notably
still lagging venet, which weighed in at 5.6 Gb/s.

Generally, I find that virtio-net exhibits non-deterministic results
from release to release.  I suspect (as we have discussed) the
tx-mitigation scheme.  Some releases buffer the daylights out of the
stream, and virtio gets close(r) throughput (e.g. 4.5 Gb/s vs 5.8 Gb/s)
but absolutely terrible latency (4000us vs 65us).  Other releases it
seems to operate with more of a compromise (1.3 Gb/s vs 3.8 Gb/s, but
350us vs 85us).
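
For the latency side of those numbers, ping gives a quick read, and
netperf's request/response test gets at the same thing; again just a
sketch with a placeholder address:

  # round-trip latency, averaged over 100 packets
  ping -c 100 -q 192.168.1.2

  # or via netperf TCP_RR: transactions/sec inverts to an average
  # round-trip time, i.e. latency_us = 1e6 / trans_per_sec
  netperf -H 192.168.1.2 -t TCP_RR -l 60 | tail -1 | awk '{print 1e6/$6, "us"}'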

I do not understand what causes the virtio performance fluctuation, as
I use the same kernel config across builds and do not typically change
the qemu userspace.  Note that some general fluctuation is evident
across the board from kernel to kernel; I am referring more to the
disparity between throughput and latency than to the absolute numbers,
as all targets seem to scale max throughput about the same per kernel.

That said, I know I need to redo the graphs against HEAD (2.6.31-rc5,
and perhaps 2.6.30, and kvm.git).  I've been heads down with the eventfd
interfaces since vbus-v3, so I haven't been as active in generating
results.  I did confirm that vbus-v4 (alacrityvm-v0.1) still produces a
similar graph, but I didn't gather that data rigorously enough to feel
comfortable publishing a graph from it.  This is on the TODO list.

If there is another patch-series/tree I should be using for comparison,
please point me at it.

HTH

Kind Regards,
-Greg

