Date:	Wed, 02 Feb 2011 13:41:33 -0800
From:	Shirley Ma <mashirle@...ibm.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	Krishna Kumar2 <krkumar2@...ibm.com>,
	David Miller <davem@...emloft.net>, kvm@...r.kernel.org,
	mashirle@...ux.vnet.ibm.com, netdev@...r.kernel.org,
	netdev-owner@...r.kernel.org, Sridhar Samudrala <sri@...ibm.com>,
	Steve Dobbelstein <steved@...ibm.com>
Subject: Re: Network performance with small packets

On Wed, 2011-02-02 at 23:20 +0200, Michael S. Tsirkin wrote:
> > On Wed, 2011-02-02 at 22:17 +0200, Michael S. Tsirkin wrote:
> > > Well, this is also the only case where the queue is stopped, no?
> > Yes. I got some debugging data; I saw that sometimes there were so
> > many packets waiting to be freed in the guest between vhost_signal
> > and the guest xmit callback.
> 
> What does this mean?

Let's look at the sequence here:

guest start_xmit()
	xmit_skb()
	if ring is full,
		enable_cb()

guest skb_xmit_done()
	disable_cb()
	printk free_old_xmit_skbs	<-- this was between more than
					    1/2 and the full ring size
	printk vq->num_free

vhost handle_tx()
	if (guest interrupt is enabled)
		signal guest to free xmit buffers

So between the point where the guest filled the ring, stopped the
queue, and enabled the callback, and the point where it received the
callback from the host and ran free_old_xmit_skbs, between half and a
full ring's worth of descriptors had become available. I thought there
would be only a few. (I disabled your vhost patch for this test.)
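
To make the sequence concrete, here is a rough C sketch of the guest
side of this path. The function names follow drivers/net/virtio_net.c,
but ring_nearly_full() is a hypothetical placeholder and the error
handling is stripped, so don't read it as the exact driver code:

	/* Sketch only: simplified from virtio_net.c, not the real code. */
	static int start_xmit(struct sk_buff *skb, struct net_device *dev)
	{
		struct virtnet_info *vi = netdev_priv(dev);

		free_old_xmit_skbs(vi);		/* reclaim completed buffers */
		xmit_skb(vi, skb);		/* post skb to the TX virtqueue */
		virtqueue_kick(vi->svq);	/* notify the host */

		if (ring_nearly_full(vi)) {	/* hypothetical helper */
			netif_stop_queue(dev);
			virtqueue_enable_cb(vi->svq);	/* ask for a TX interrupt */
		}
		return NETDEV_TX_OK;
	}

	/* TX completion callback, run when vhost signals the guest. */
	static void skb_xmit_done(struct virtqueue *svq)
	{
		struct virtnet_info *vi = svq->vdev->priv;

		virtqueue_disable_cb(svq);	/* suppress further interrupts */
		/* My printks here showed 1/2 to a full ring of
		 * descriptors had already completed by this point. */
		netif_wake_queue(vi->dev);
	}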
 

> > Looks like too much time is spent between vhost_signal and the
> > guest xmit callback?
> 
> 
> 
> > > > I tried to accumulate multiple guest-to-host notifications for
> > > > TX xmits; it did help multiple-stream TCP_RR results.
> > > I don't see a point in delaying the used idx update, do you?
> > 
> > It might cause each vhost handle_tx run to process more packets.
> 
> I don't understand. It's a couple of writes - what is the issue?

Oh, handle_tx could process more packets per loop in the
multiple-stream TCP_RR case. I need to print out how many packets each
loop processes to confirm this.
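
Something like the following is what I have in mind — a rough sketch
of that instrumentation only. The helpers (vhost_get_vq_desc,
vhost_add_used_and_signal) are the real vhost ones, the loop body is
abbreviated, and the counter plus pr_debug are just for this
experiment:

	/* Sketch only: count packets drained per handle_tx run. */
	static void handle_tx(struct vhost_net *net)
	{
		struct vhost_virtqueue *vq = &net->dev.vqs[VHOST_NET_VQ_TX];
		unsigned int out, in, pkts = 0;
		int head;

		for (;;) {
			head = vhost_get_vq_desc(&net->dev, vq, vq->iov,
						 ARRAY_SIZE(vq->iov),
						 &out, &in, NULL, NULL);
			if (head == vq->num)	/* ring is empty */
				break;
			/* ... sendmsg() the packet to the tap device ... */
			vhost_add_used_and_signal(&net->dev, vq, head, 0);
			pkts++;
		}
		pr_debug("handle_tx: %u packets this run\n", pkts);
	}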

Shirley

