Message-ID: <8762ozk1qd.fsf@rustcorp.com.au>
Date:	Wed, 25 May 2011 10:58:26 +0930
From:	Rusty Russell <rusty@...tcorp.com.au>
To:	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	linux-kernel@...r.kernel.org, Carsten Otte <cotte@...ibm.com>,
	Christian Borntraeger <borntraeger@...ibm.com>,
	linux390@...ibm.com, Martin Schwidefsky <schwidefsky@...ibm.com>,
	Heiko Carstens <heiko.carstens@...ibm.com>,
	Shirley Ma <xma@...ibm.com>, lguest@...ts.ozlabs.org,
	virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
	linux-s390@...r.kernel.org, kvm@...r.kernel.org,
	Krishna Kumar <krkumar2@...ibm.com>,
	Tom Lendacky <tahm@...ux.vnet.ibm.com>, steved@...ibm.com,
	habanero@...ux.vnet.ibm.com
Subject: Re: [PATCHv2 10/14] virtio_net: limit xmit polling

On Mon, 23 May 2011 14:19:00 +0300, "Michael S. Tsirkin" <mst@...hat.com> wrote:
> On Mon, May 23, 2011 at 11:37:15AM +0930, Rusty Russell wrote:
> > Can we hit problems with OOM?  Sure, but no worse than now...
> > The problem is that this "virtqueue_get_capacity()" returns the worst
> > case, not the normal case.  So using it is deceptive.
> > 
> 
> Maybe just document this?

Yes, but also by renaming virtqueue_get_capacity().  That takes it from a 3
to a 6 on the API hard-to-misuse scale.

How about virtqueue_min_capacity()?  That makes the reader realize something
weird is going on.
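
At the call site that would read something like this (just a sketch, not the
patch itself: the field names are from memory, the 2 + MAX_SKB_FRAGS threshold
is the one we've been throwing around, and error handling is elided):

static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct virtnet_info *vi = netdev_priv(dev);

	/* Queue this skb first... */
	xmit_skb(vi, skb);
	virtqueue_kick(vi->svq);

	/* ...then stop the queue unless a worst-case skb is still
	 * guaranteed to fit.  "min" says this is the pessimistic
	 * (all-direct) count, not what indirect descriptors would
	 * actually let us squeeze in. */
	if (virtqueue_min_capacity(vi->svq) < 2 + MAX_SKB_FRAGS)
		netif_stop_queue(dev);

	return NETDEV_TX_OK;
}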

> I still believe capacity really needs to be decided
> at the virtqueue level, not in the driver.
> E.g. with indirect each skb uses a single entry: freeing
> 1 small skb is always enough to have space for a large one.
> 
> I do understand how it seems a waste to leave direct space
> in the ring while we might in practice have space
> due to indirect. Didn't come up with a nice way to
> solve this yet - but 'no worse than now :)'

Agreed.
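
(Rough numbers to show the gap, assuming 4K pages and, say, a 256-entry ring:
with indirect descriptors every skb costs a single slot, so freeing any one
skb always makes room for any other; counted at the direct worst case of
2 + MAX_SKB_FRAGS slots per skb, the same ring only ever looks big enough for
a dozen or so in-flight skbs.)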

> > > I just wanted to localize the 2+MAX_SKB_FRAGS logic that tries to make
> > > sure we have enough space in the buffer. Another way to do
> > > that is with a define :).
> > 
> > To do this properly, we should really be using the actual number of sg
> > elements needed, but we'd have to do most of xmit_skb beforehand so we
> > know how many.
> > 
> > Cheers,
> > Rusty.
> 
> Maybe I'm confused here.  The problem isn't the failing
> add_buf for the given skb IIUC.  What we are trying to do here is stop
> the queue *before xmit_skb fails*. We can't look at the
> number of fragments in the current skb - the next one can be
> much larger.  That's why we check capacity after xmit_skb,
> not before it, right?

No, I was confused...  More coffee!

Thanks,
Rusty.
