Date:	Tue, 3 Feb 2009 05:05:14 -0800 (PST)
From:	david@...g.hm
To:	Evgeniy Polyakov <zbr@...emap.net>
cc:	Herbert Xu <herbert@...dor.apana.org.au>,
	Jarek Poplawski <jarkao2@...il.com>,
	David Miller <davem@...emloft.net>, w@....eu,
	dada1@...mosbay.com, ben@...s.com, mingo@...e.hu,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	jens.axboe@...cle.com
Subject: Re: [PATCH v2] tcp: splice as many packets as possible at once

On Tue, 3 Feb 2009, Evgeniy Polyakov wrote:

> On Tue, Feb 03, 2009 at 10:24:31PM +1100, Herbert Xu (herbert@...dor.apana.org.au) wrote:
>>> I even believe that for some hardware it is the only way to deal
>>> with the jumbo frames.
>>
>> Not necessarily.  Even if the hardware can only DMA into contiguous
>> memory, we can always allocate a sufficient number of contiguous
>> buffers initially, and then always copy them into fragmented skbs
>> at receive time.  This way the contiguous buffers are never
>> depleted.
>
> How many such preallocated frames would be enough? Is it enough to have
> all of the sockets' receive buffer sizes divided by the MTU size? Or just
> some of them, or... That would work, but there are way too many corner
> cases.
>
>> Granted copying sucks, but this is really because the underlying
>> hardware is badly designed.  Also copying is way better than
>> not receiving at all due to memory fragmentation.
>
> Maybe just disallow jumbo frames when memory is fragmented enough, and
> fall back to a smaller MTU in that case? With the LRO/GRO stuff there
> should not be that much overhead compared to multiple-page copies.


1. define 'fragmented enough'

2. the packet size was already negotiated on your existing connections; 
how are you going to change all of those on the fly?

3. what do you do when a remote system sends you a large packet? drop it 
on the floor?

having some pool of large buffers to receive into (and copy out of those 
buffers as quickly as possible) would cause a performance hit when things 
get bad, but isn't that better than dropping packets?
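
to make the copy-out step concrete, here is a rough sketch of what 
Herbert's "copy into fragmented skbs" idea above could look like 
(kernel-style C, all of it hypothetical, not from any real driver):

#include <linux/skbuff.h>
#include <linux/mm.h>
#include <linux/kernel.h>

/*
 * hypothetical sketch: the NIC has DMAed a frame into a big contiguous
 * buffer from the fixed pool; copy the data into individually allocated
 * pages so the big buffer can be reposted to the hardware immediately
 * and the pool is never depleted.
 */
static struct sk_buff *copy_to_paged_skb(const char *buf, unsigned int len)
{
	struct sk_buff *skb;
	unsigned int copied = 0;
	int i = 0;

	/* small linear area; a real driver would pull the headers in here */
	skb = alloc_skb(128, GFP_ATOMIC);
	if (!skb)
		return NULL;

	/* a 9K jumbo frame only needs two or three page fragments */
	while (copied < len) {
		unsigned int chunk = min_t(unsigned int, len - copied,
					   (unsigned int)PAGE_SIZE);
		struct page *page = alloc_page(GFP_ATOMIC);

		if (!page) {
			kfree_skb(skb);
			return NULL;
		}
		memcpy(page_address(page), buf + copied, chunk);
		skb_fill_page_desc(skb, i++, page, 0, chunk);
		skb->len      += chunk;
		skb->data_len += chunk;
		skb->truesize += PAGE_SIZE;
		copied += chunk;
	}
	return skb;
}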

as for the number of buffers to use: make a reasonable guess. if you only 
have a small number of packets around, use the buffers directly; as you 
use more of them, start copying; as usage climbs, attempt to allocate 
more. if you can't allocate more (and you have all of your existing ones 
in use) you will have to drop the packet, but at that point are you really 
in any worse shape than if you didn't have some mechanism to copy out of 
the large buffers?
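
as a sketch, that policy could look like the following (every name and 
threshold below is hypothetical; the point is just the order of the 
fallbacks):

#include <linux/errno.h>

struct big_buf_pool {
	unsigned int total;	/* buffers currently in the pool      */
	unsigned int in_use;	/* held by in-flight skbs             */
	unsigned int max;	/* hard cap on how far the pool grows */
};

/* hypothetical helpers */
int deliver_buffer(void *buf, unsigned int len);	/* pass up as-is */
int copy_and_deliver(void *buf, unsigned int len);	/* copy to pages */
int grow_pool(struct big_buf_pool *pool);		/* allocate more */

/* returns 0 if delivered, -ENOMEM if the packet had to be dropped */
static int rx_big_frame(struct big_buf_pool *pool, void *buf,
			unsigned int len)
{
	/* only a few packets around: hand the big buffer up directly */
	if (pool->in_use < pool->total / 4) {
		pool->in_use++;
		return deliver_buffer(buf, len);
	}

	/* usage climbing: copy out so the buffer can be reposted now */
	if (copy_and_deliver(buf, len) == 0)
		return 0;

	/* copying failed for lack of pages: try to grow the pool */
	if (pool->total < pool->max && grow_pool(pool) == 0) {
		pool->in_use++;
		return deliver_buffer(buf, len);
	}

	/* everything in use and no memory for more: we have to drop,
	 * but no sooner than we would have without the copy fallback */
	return -ENOMEM;
}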

David Lang
