Message-ID: <50579AAD.6030004@hp.com>
Date:	Mon, 17 Sep 2012 14:48:29 -0700
From:	Rick Jones <rick.jones2@...com>
To:	Gregory Carter <gcarter@...gi.com>
CC:	netdev@...r.kernel.org, kvm@...r.kernel.org,
	Lee Schermerhorn <Lee.Schermerhorn@...com>,
	Brian Haley <Brian.Haley@...com>
Subject: Re: NIC emulation with built-in rate limiting?

So, while the question concerns the "stability" of how things get 
plumbed for a VM, and whether moving some of that into the NIC emulation 
might help :) I've gone ahead and re-run the experiment on bare iron. 
This time, just for kicks, I used a 50 Mbit/s throttle both inbound and 
outbound.  The results can be seen in:

ftp://ftp.netperf.org/50_mbits.tgz
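For anyone wanting to reproduce: the runs were plain netperf bulk 
transfers in each direction.  The test lengths and host name below are 
my illustrative choices, not lifted from the tarball:

```shell
# Outbound (egress) bulk transfer from this system to the remote.
netperf -H remotehost -t TCP_STREAM -l 60

# Inbound (ingress) bulk transfer, pulled from the remote
# (TCP_MAERTS is TCP_STREAM with the data flowing the other way).
netperf -H remotehost -t TCP_MAERTS -l 60
```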

Since this is now bare iron, inbound is ingress and outbound is egress. 
That is reversed from the VM situation, where VM outbound traverses the 
ingress filter and VM inbound traverses the egress qdisc.
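For reference, the kind of configuration under test looks roughly like 
the sketch below.  The interface name and the burst/latency parameters 
are assumptions on my part, not taken from the tarball:

```shell
# Egress shaping at 50 Mbit/s with a token bucket filter (tbf).
# Excess packets queue here, so TCP sees backpressure fairly directly.
tc qdisc add dev eth2 root tbf rate 50mbit burst 64kb latency 50ms

# Ingress policing at 50 Mbit/s.  There is no queue on this path;
# packets arriving above the rate are simply dropped.
tc qdisc add dev eth2 handle ffff: ingress
tc filter add dev eth2 parent ffff: protocol ip u32 \
    match u32 0 0 police rate 50mbit burst 64kb drop
```

The queue-versus-drop difference between those two halves is the 
asymmetry in play here.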

Both systems were running Ubuntu 12.04.1 with 3.2.0-26 kernels, there 
was plenty of CPU horsepower (2x E5-2680s in this case), and the network 
between them was 10GbE using their 530FLB LOMs (BCM 57810S) connected 
via a ProCurve 6120 10GbE switch.  That simply happened to be the most 
convenient bare-iron hardware I had on hand as one of the cobbler's 
children.  There was no X running on the systems; the only thing of note 
running on them was netperf.

So, is the comparative instability between inbound and outbound 
fundamentally inherent in using ingress policing, or more a matter of 
"Silly Rick, you should be using <these settings> instead?"
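One candidate for "<these settings>" -- my guess, not something anyone 
in this thread has suggested yet -- is to stop policing ingress 
altogether and instead redirect it through an ifb device, so a real 
egress qdisc with a queue enforces the inbound limit.  Device names are 
illustrative:

```shell
# Redirect ingress traffic on eth2 through an ifb device so that a
# queueing discipline (tbf) rather than a drop-only policer enforces
# the 50 Mbit/s inbound limit.
modprobe ifb numifbs=1
ip link set dev ifb0 up

tc qdisc add dev eth2 handle ffff: ingress
tc filter add dev eth2 parent ffff: protocol ip u32 \
    match u32 0 0 action mirred egress redirect dev ifb0

tc qdisc add dev ifb0 root tbf rate 50mbit burst 64kb latency 50ms
```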

If the former, is it then worthwhile to try to have NIC emulation only 
pull from the VM at the emulated rate, to keep the queues in the VM 
where it can react to them more directly?  And are there any NIC 
emulations doing that already (as virtio does not seem to at present)?

happy benchmarking,

rick jones
--
