Date:	Tue, 22 Dec 2009 15:14:18 -0600
From:	Anthony Liguori <anthony@...emonkey.ws>
To:	Andi Kleen <andi@...stfloor.org>
CC:	Gregory Haskins <gregory.haskins@...il.com>,
	Avi Kivity <avi@...hat.com>, Ingo Molnar <mingo@...e.hu>,
	kvm@...r.kernel.org, Andrew Morton <akpm@...ux-foundation.org>,
	torvalds@...ux-foundation.org,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	netdev@...r.kernel.org,
	"alacrityvm-devel@...ts.sourceforge.net" 
	<alacrityvm-devel@...ts.sourceforge.net>
Subject: Re: [GIT PULL] AlacrityVM guest drivers for 2.6.33

On 12/22/2009 11:33 AM, Andi Kleen wrote:
>> We're not talking about vaporware.  vhost-net exists.
>>      
> Is it as fast as the alacrityvm setup then e.g. for network traffic?
>
> Last I heard the first could do wirespeed 10Gbit/s on standard hardware.
>    

I'm very wary of any such claims.  As far as I know, no one has done an 
exhaustive study of vbus and published the results.  This is why it's so 
important to understand why the results are what they are when we see 
numbers posted.

For instance, check out 
http://www.redhat.com/f/pdf/summit/cwright_11_open_source_virt.pdf slide 32.

These benchmarks show KVM without vhost-net pretty closely pacing 
native.  With large message sizes, it's awfully close to line rate.

Comparatively speaking, consider 
http://developer.novell.com/wiki/index.php/AlacrityVM/Results

vbus here is pretty far off of native and virtio-net is ridiculous.

Why are the results so different?  Because benchmarking is fickle and 
networking performance is complicated.  No single benchmarking scenario is 
going to give you a very good picture overall.  It's also relatively 
easy to stack the deck in favor of one approach versus another.  The 
virtio-net setup probably made extensive use of pinning and other tricks 
to make things faster than a normal user would see them.  It ends up 
creating a perfect combination of batching, which is pretty much just 
cooking the mitigation schemes to do extremely well on one benchmark.
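To illustrate why tuning batching/mitigation for one benchmark can skew 
results, here is a toy cost model (the function and all cost numbers 
below are hypothetical illustrations, not measurements of virtio-net, 
vbus, or vhost-net): per-batch overhead (exits, notifications) gets 
amortized across the packets in a batch, so the "right" batch size 
depends entirely on the message size the benchmark happens to use.

```python
# Toy model: throughput of a batched transfer loop where each batch pays
# a fixed overhead (guest exit / notification) plus a per-packet cost.
# All numbers are made up purely for illustration.

def throughput_gbps(msg_bytes, batch_pkts,
                    per_batch_us=20.0,  # hypothetical fixed cost per batch
                    per_pkt_us=0.5):    # hypothetical per-packet copy cost
    """Return modeled throughput in Gbit/s for one batching configuration."""
    batch_time_us = per_batch_us + batch_pkts * per_pkt_us
    bits = batch_pkts * msg_bytes * 8
    # bits per microsecond == Mbit/s * 1e3, so divide by 1e3 for Gbit/s
    return bits / batch_time_us / 1e3

if __name__ == "__main__":
    # Aggressive batching looks great for one message size and far less
    # impressive for another -- exactly the kind of sensitivity that makes
    # a single netperf run a poor basis for comparing two architectures.
    for msg in (64, 1500, 65536):
        for batch in (1, 32, 256):
            print(f"msg={msg:>6}B batch={batch:>3}: "
                  f"{throughput_gbps(msg, batch):10.3f} Gbit/s (model)")
```

The point of the sketch is only the shape of the curve: amortizing the 
fixed per-batch cost rewards whichever configuration the benchmark was 
tuned for, not the architecture as a whole.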

This is why it's so important to look at vbus from the perspective of 
critically asking, what precisely makes it better than virtio.  A couple 
benchmarks on a single piece of hardware does not constitute an 
existence proof that it's better overall.

There are a ton of differences between virtio and vbus because vbus was 
written in a vacuum wrt virtio.  I'm not saying we are totally committed 
to virtio no matter what, but it should take a whole lot more than a 
couple netperf runs on a single piece of hardware for a single kind of 
driver to justify replacing it.

> Can vhost-net do the same thing?

I think the fundamental question is, what makes vbus better than 
vhost-net?  vhost-net exists and is further along upstream than vbus is 
at the moment.  If that question cannot be answered with technical facts 
and numbers to back them up, then we're just arguing for the sake of 
arguing.

Regards,

Anthony Liguori

> -Andi
>    
