Message-ID: <OF381CCA86.B82857CC-ON86257826.00629F88-86257826.006596F5@us.ibm.com>
Date:	Fri, 28 Jan 2011 12:29:37 -0600
From:	Steve Dobbelstein <steved@...ibm.com>
To:	mashirle@...ux.vnet.ibm.com
Cc:	kvm@...r.kernel.org, "Michael S. Tsirkin" <mst@...hat.com>,
	netdev@...r.kernel.org
Subject: Re: Network performance with small packets

mashirle@...ux.vnet.ibm.com wrote on 01/27/2011 02:15:05 PM:

> On Thu, 2011-01-27 at 22:05 +0200, Michael S. Tsirkin wrote:
> > One simple theory is that guest net stack became faster
> > and so the host can't keep up.
>
> Yes, that's what I think here. Some qdisc code has been changed
> recently.

I ran a test with txqueuelen set to 128 in the guest, instead of the
default of 1000, in an attempt to slow down the guest transmits.  The
change had no effect on either the throughput or the CPU usage.
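
For reference, the guest's queue length can be changed with
"ip link set dev eth0 txqueuelen 128" or programmatically via the
SIOCSIFTXQLEN ioctl.  A minimal sketch follows; the interface name
"eth0" and the length are placeholders for the guest's actual setup,
not taken from the test above:

/* set_txqueuelen.c: set an interface's tx queue length via ioctl. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/sockios.h>

int main(void)
{
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);	/* placeholder */
	ifr.ifr_qlen = 128;				/* default is 1000 */
	if (ioctl(fd, SIOCSIFTXQLEN, &ifr) < 0) {
		perror("SIOCSIFTXQLEN");
		close(fd);
		return 1;
	}
	close(fd);
	return 0;
}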

On the other hand, I ran some tests with different CPU pinnings and
with hyperthreading enabled and disabled.  Here is a summary of the
results; the percentages are changes relative to the baseline run
(hyperthreading enabled, no pinning).

Pinning configuration 1:  pin the VCPUs and pin the vhost thread to one of
the VCPU CPUs
Pinning configuration 2:  pin the VCPUs and pin the vhost thread to a
separate CPU on the same socket
Pinning configuration 3:  pin the VCPUs and pin the vhost thread to a
separate CPU on a different socket

HT   Pinning   Throughput  CPU usage
Yes  config 1  - 40%       - 40%
Yes  config 2  - 37%       - 35%
Yes  config 3  - 37%       - 36%
No   none         0%       -  5%
No   config 1  - 41%       - 43%
No   config 2  + 32%       -  4%
No   config 3  + 34%       +  9%

Pinning the vhost thread to the same CPU as a guest VCPU hurts
performance.  Turning off hyperthreading and pinning the VCPUs and the
vhost thread to separate CPUs significantly improves performance,
bringing it into a competitive range with other hypervisors.
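
For reference, the pinning itself can be done with taskset(1) from the
command line, or programmatically with sched_setaffinity(2).  Below is
a minimal sketch; the PID of the vhost thread (found with something
like "ps -eLf | grep vhost") and the target CPU are supplied by the
caller, not taken from the test setup above:

/* pin_task.c: pin a task (e.g. a VCPU or vhost thread) to one CPU. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
	cpu_set_t set;
	pid_t pid;
	int cpu;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <pid> <cpu>\n", argv[0]);
		return 1;
	}
	pid = atoi(argv[1]);	/* e.g. the vhost thread's PID */
	cpu = atoi(argv[2]);	/* target CPU number */

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(pid, sizeof(set), &set) < 0) {
		perror("sched_setaffinity");
		return 1;
	}
	return 0;
}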

Steve D.

