Date:	Wed, 2 Feb 2011 06:42:22 +0200
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Krishna Kumar2 <krkumar2@...ibm.com>
Cc:	David Miller <davem@...emloft.net>, kvm@...r.kernel.org,
	Shirley Ma <mashirle@...ibm.com>, netdev@...r.kernel.org,
	rusty@...tcorp.com.au, steved@...ibm.com
Subject: Re: Network performance with small packets

On Wed, Feb 02, 2011 at 10:09:18AM +0530, Krishna Kumar2 wrote:
> > "Michael S. Tsirkin" <mst@...hat.com> 02/02/2011 03:11 AM
> >
> > On Tue, Feb 01, 2011 at 01:28:45PM -0800, Shirley Ma wrote:
> > > On Tue, 2011-02-01 at 23:21 +0200, Michael S. Tsirkin wrote:
> > > > Confused. We compare capacity to skb frags, no?
> > > > That's sg I think ...
> > >
> > > The current guest kernel uses indirect buffers; num_free returns how many
> > > descriptors are available, not how many skb frags. So it's wrong here.
> > >
> > > Shirley
> >
> > I see. Good point. In other words, when we complete the buffer
> > it was indirect, but when we add a new one we
> > cannot allocate indirect, so we consume more descriptors.
> > And then we start the queue and the add will fail.
> > I guess we need some kind of API to figure out
> > whether the buf we completed was indirect?
> >
> > Another failure mode is when skb_xmit_done
> > wakes the queue: it might be too early, there
> > might not be space for the next packet in the vq yet.
> 
> I am not sure if this is the problem - shouldn't you
> see these messages:
> 	if (likely(capacity == -ENOMEM)) {
> 		dev_warn(&dev->dev,
> 			"TX queue failure: out of memory\n");
> 	} else {
> 		dev->stats.tx_fifo_errors++;
> 		dev_warn(&dev->dev,
> 			"Unexpected TX queue failure: %d\n",
> 			capacity);
> 	}
> in the next xmit? I am not getting these in my testing.

Yes, I don't think we hit this in our testing,
simply because we don't stress memory.
If you disable indirect buffers, then you might see this.
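
To make the accounting problem concrete, here is a small toy model
(not the actual virtio_net/virtio_ring code; the ring size, the "+2"
slot count and all names are invented for illustration). Completing a
buffer that was posted as a single indirect descriptor frees only one
ring slot, so a wake based on "some slots are free" can come long
before a direct (non-indirect) skb, which needs nr_frags + 2 slots,
actually fits:

/* Toy model of the stop/wake accounting, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE	256

static int ring_free = RING_SIZE;	/* free descriptor slots in the vq */
static bool queue_stopped;

/* A buffer posted as ONE indirect descriptor frees a single slot,
 * no matter how many frags it carried. */
static void complete_indirect_buf(void)
{
	ring_free += 1;
	if (queue_stopped && ring_free > 0) {
		/* Premature wake: one free slot is not enough for a
		 * direct skb that needs nr_frags + 2 slots. */
		queue_stopped = false;
		printf("queue woken with %d free slot(s)\n", ring_free);
	}
}

/* If the indirect allocation fails, the add falls back to direct
 * descriptors: one slot per frag plus header. */
static bool add_skb_direct(int nr_frags)
{
	int needed = nr_frags + 2;

	if (ring_free < needed) {
		queue_stopped = true;	/* add fails right after the wake */
		return false;
	}
	ring_free -= needed;
	return true;
}

int main(void)
{
	ring_free = 0;
	queue_stopped = true;

	complete_indirect_buf();	/* wakes the queue with 1 free slot */
	if (!add_skb_direct(16))	/* a 16-frag skb cannot fit */
		printf("xmit fails immediately after the wake\n");
	return 0;
}

This is only meant to show why comparing the free-descriptor count
against skb frags gives the wrong answer once indirect buffers are in
play.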

> > A solution might be to keep some kind of pool
> > around for indirect, we wanted to do it for block anyway ...
> 
> Your vhost patch should fix this automatically. Right?

Reduce the chance of it happening, yes.
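
A minimal sketch of what such a pool could look like (this is not the
real virtio_ring code; the structure names, the sizes and the complete
absence of locking are assumptions for illustration). The descriptor
chains are preallocated up front, so posting an indirect buffer no
longer depends on kmalloc succeeding under memory pressure, and the
caller only falls back to direct descriptors when the pool runs dry:

/* Sketch of a preallocated pool of indirect descriptor chains. */
#include <stdlib.h>
#include <string.h>

#define POOL_ENTRIES	64	/* preallocated indirect chains */
#define CHAIN_DESCS	20	/* descriptors per chain, ~MAX_SKB_FRAGS + 2 */

struct desc {			/* stand-in for struct vring_desc */
	unsigned long long addr;
	unsigned int len;
	unsigned short flags;
	unsigned short next;
};

struct indirect_pool {
	struct desc *chains[POOL_ENTRIES];
	int top;		/* number of chains currently available */
};

static int pool_init(struct indirect_pool *p)
{
	p->top = 0;
	for (int i = 0; i < POOL_ENTRIES; i++) {
		p->chains[i] = calloc(CHAIN_DESCS, sizeof(struct desc));
		if (!p->chains[i])
			return -1;
		p->top = i + 1;
	}
	return 0;
}

/* Take a preallocated chain; on an empty pool return NULL so the
 * caller can fall back to direct descriptors instead of failing. */
static struct desc *pool_get(struct indirect_pool *p)
{
	return p->top > 0 ? p->chains[--p->top] : NULL;
}

/* Return a chain once its buffer has been completed. */
static void pool_put(struct indirect_pool *p, struct desc *chain)
{
	memset(chain, 0, CHAIN_DESCS * sizeof(struct desc));
	p->chains[p->top++] = chain;
}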

> 
> Thanks,
> 
> - KK
