Date:	Tue, 3 Feb 2009 15:07:15 +0300
From:	Evgeniy Polyakov <zbr@...emap.net>
To:	Herbert Xu <herbert@...dor.apana.org.au>
Cc:	Jarek Poplawski <jarkao2@...il.com>,
	David Miller <davem@...emloft.net>, w@....eu,
	dada1@...mosbay.com, ben@...s.com, mingo@...e.hu,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	jens.axboe@...cle.com
Subject: Re: [PATCH v2] tcp: splice as many packets as possible at once

On Tue, Feb 03, 2009 at 10:53:13PM +1100, Herbert Xu (herbert@...dor.apana.org.au) wrote:
> > How many such preallocated frames is enough? Is it enough to have all
> > sockets' recv buffer sizes divided by the MTU size? Or just some of them,
> > or... That will work, but there are way too many corner cases.
> 
> Easy, the driver is already allocating them right now so we don't
> have to change a thing :)

How many? A hundred or so descriptors (or even several thousand) -
this really does not scale for moderately loaded IO servers; that's
why we frequently get questions about why dmesg is filled with order-3
and higher allocation failure dumps.

> All we have to do is change the refill mechanism to always allocate
> a replacement skb in the rx path, and if that fails, allocate a
> fragmented skb instead and copy the received data into it so that
> the contiguous skb can be reused.

Having a 'reserve' skb per socket is a good idea, but what if the number
of sockets is way too big?
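The fallback Herbert describes can be sketched in user space. This is a
minimal model, not kernel code: `struct buf`, `alloc_contiguous()`,
`alloc_fragmented()`, and `refill()` are all hypothetical stand-ins
(a real driver would use netdev_alloc_skb() and paged fragments), and
allocation pressure is simulated with a flag.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for an skb: "contiguous" models a linear skb,
 * "fragmented" models one built from order-0 paged fragments. */
struct buf {
	int fragmented;
	size_t len;
	unsigned char data[2048];
};

/* Simulate higher-order allocation failure under memory pressure. */
static int pressure;

static struct buf *alloc_contiguous(size_t len)
{
	struct buf *b;

	if (pressure)
		return NULL;	/* the order-3+ allocation failed */
	b = malloc(sizeof(*b));
	if (!b)
		return NULL;
	b->fragmented = 0;
	b->len = len;
	return b;
}

static struct buf *alloc_fragmented(size_t len)
{
	/* Order-0 pages: assumed to succeed even under pressure. */
	struct buf *b = malloc(sizeof(*b));

	if (!b)
		return NULL;
	b->fragmented = 1;
	b->len = len;
	return b;
}

/* Refill: try a contiguous replacement first; on failure, fall back
 * to a fragmented buffer and copy the received data into it so the
 * driver's contiguous rx buffer can be reused. */
static struct buf *refill(const unsigned char *rx, size_t len)
{
	struct buf *b = alloc_contiguous(len);

	if (!b)
		b = alloc_fragmented(len);
	if (b)
		memcpy(b->data, rx, len);
	return b;
}
```

The point of the copy-on-fallback is that the contiguous rx buffer is
never consumed when memory is tight, so the driver ring stays populated
without a per-socket reserve.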

-- 
	Evgeniy Polyakov