Date:	Wed, 09 Mar 2011 16:59:05 -0800
From:	Shirley Ma <mashirle@...ibm.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	Tom Lendacky <tahm@...ux.vnet.ibm.com>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Krishna Kumar2 <krkumar2@...ibm.com>,
	David Miller <davem@...emloft.net>, kvm@...r.kernel.org,
	netdev@...r.kernel.org, steved@...ibm.com
Subject: Re: Network performance with small packets - continued

On Wed, 2011-03-09 at 23:56 +0200, Michael S. Tsirkin wrote:
> >   Txn Rate: 153,696.59 Txn/Sec, Pkt Rate: 305,358 Pkgs/Sec
> >   Exits: 62,603.37 Exits/Sec
> >   TxCPU: 3.73%  RxCPU: 98.52%
> >   Virtio1-input  Interrupts/Sec (CPU0/CPU1): 11,564/0
> >   Virtio1-output Interrupts/Sec (CPU0/CPU1): 0/0
> > 
> > About a 77% increase over baseline and about 74% of baremetal.
> 
> Hmm we get about 20 packets per interrupt on average.
> That's pretty decent. The problem is with exits.
> Let's try something adaptive in the host? 

I did some hacking on this before: for 32-64 multiple-stream TCP_RR
cases, either queuing multiple skbs per kick or delaying the vhost
exit from handle_tx improved aggregate TCP_RR performance, but
single-stream TCP_RR latency increased.
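
The delay part was roughly like the sketch below (from memory;
TX_POLL_BUDGET is a made-up name and value, and the surrounding
handle_tx() details are approximate, not the exact patch):

#define TX_POLL_BUDGET 64       /* made-up knob: how long to linger */

        for (;;) {
                head = vhost_get_vq_desc(&net->dev, vq, vq->iov,
                                         ARRAY_SIZE(vq->iov),
                                         &out, &in, NULL, NULL);
                if (head == vq->num) {  /* ring looks empty */
                        if (retries++ < TX_POLL_BUDGET) {
                                cpu_relax();    /* linger before giving up */
                                continue;
                        }
                        if (unlikely(vhost_enable_notify(vq))) {
                                /* guest queued more while we raced */
                                vhost_disable_notify(vq);
                                retries = 0;
                                continue;
                        }
                        break;  /* truly idle: exit and wait for a kick */
                }
                retries = 0;
                /* ... hand the skb to the socket as before ... */
        }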

Here, the test is about 100 TCP_RR streams from a bare-metal client to
a KVM guest. The kick_notify traffic from the guest RX path should be
small: the guest only does a kick after refilling half the ring, and
even for that kick, vhost might already have disabled the
notification, so it costs no exit.
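
For reference, the guest refill path in virtnet_poll()
(drivers/net/virtio_net.c) is roughly the following, paraphrased from
memory:

        /* refill only once the RX ring has drained below half
         * capacity; the actual kick happens inside try_fill_recv() */
        if (vi->num < vi->max / 2) {
                if (!try_fill_recv(vi, GFP_ATOMIC))
                        schedule_delayed_work(&vi->refill, 0);
        }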

The kick_notify from the guest TX path seems to be the main cause of
the guest's huge number of exits: the guest does a kick for every skb
it sends, and for each kick vhost most likely exits on an empty ring
without ever reaching VHOST_NET_WEIGHT. Indirect buffers are used, so
I wonder how many packets handle_tx processes per invocation here?
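
Paraphrasing the relevant bits: the guest's start_xmit() kicks once
per skb, while on the host side the weight check in handle_tx() only
fires if the ring stays non-empty:

        /* guest, start_xmit() in virtio_net.c: one kick per skb */
        xmit_skb(vi, skb);
        virtqueue_kick(vi->svq);

        /* host, handle_tx() in vhost/net.c: with one skb per kick we
         * take the empty-ring exit long before getting here */
        total_len += len;
        if (unlikely(total_len >= VHOST_NET_WEIGHT)) {  /* 0x80000 */
                vhost_poll_queue(&vq->poll);
                break;
        }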

In theory, with lots of TCP_RR streams, the guest should be able to
keep feeding xmit skbs into the send vq, so vhost should be able to
keep notification disabled most of the time, and the number of guest
exits should be significantly reduced. Why do we still see lots of
guest exits here? Is it worth trying 256 TCP_RR streams (the send
queue size)?
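
After all, the guest-side kick is already conditional; virtqueue_kick()
in drivers/virtio/virtio_ring.c does roughly:

        /* expose the new avail entries, then notify (one pio write,
         * hence one VM exit) only if the host hasn't suppressed it */
        vq->vring.avail->idx += vq->num_added;
        vq->num_added = 0;
        virtio_mb();
        if (!(vq->vring.used->flags & VRING_USED_F_NO_NOTIFY))
                vq->notify(&vq->vq);

So while vhost keeps VRING_USED_F_NO_NOTIFY set, a guest kick should
cost nothing; the exit numbers suggest vhost drains the ring and
re-enables notification between almost every skb.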

Tom's kick_notify data from Rusty's patch would be helpful for
understanding what's going on here.

Thanks
Shirley



