Open Source and information security mailing list archives
 
Date: Mon, 8 Jan 2024 17:17:15 +0100
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>
CC: "David S. Miller" <davem@...emloft.net>, Eric Dumazet
	<edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni
	<pabeni@...hat.com>, Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
	Michal Kubiak <michal.kubiak@...el.com>, Larysa Zaremba
	<larysa.zaremba@...el.com>, Alexei Starovoitov <ast@...nel.org>, "Daniel
 Borkmann" <daniel@...earbox.net>, <intel-wired-lan@...ts.osuosl.org>,
	<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH RFC net-next 05/34] idpf: convert header split mode to
 libie + napi_build_skb()

From: Willem De Bruijn <willemdebruijn.kernel@...il.com>
Date: Wed, 27 Dec 2023 10:30:48 -0500

> Alexander Lobakin wrote:
>> Currently, idpf uses the following model for the header buffers:
>>
>> * buffers are allocated via dma_alloc_coherent();
>> * when receiving, napi_alloc_skb() is called and then the header is
>>   copied to the newly allocated linear part.
>>
>> This is far from optimal as DMA coherent zone is slow on many systems
>> and memcpy() neutralizes the idea and benefits of the header split.
> 
> Do you have data showing this?

Showing slow coherent DMA or slow memcpy()?
For the first one, try MIPS.
For the second, try comparing performance on ice with the "legacy-rx"
private flag disabled vs. enabled.

> 
> The assumption for the current model is that the headers will be
> touched shortly after, so the copy just primes the cache.

They won't be touched in many cases, e.g. XDP_DROP.
Or the headers can be long: memcpy(32) != memcpy(128).
The current model allocates a new skb with a linear part, which is a
real memory allocation. napi_build_skb() doesn't allocate anything
except struct sk_buff, which is usually available in the NAPI percpu
cache. If build_skb() weren't more efficient, it wouldn't have been
introduced.
The current model just assumes default socket traffic with ~40-byte
headers and no XDP etc.
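To illustrate the difference, here's a userspace analogy (not kernel code; `struct fake_skb`, `rx_alloc_copy()` and `rx_build()` are made-up names standing in for napi_alloc_skb() + memcpy() vs. napi_build_skb()):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct fake_skb {
	unsigned char *data;	/* linear part / wrapped buffer */
	size_t len;
	int owns_data;		/* did we allocate the linear part? */
};

/* Model of the napi_alloc_skb() + memcpy() path: a second buffer is
 * allocated per packet and the header is copied into it. */
static struct fake_skb *rx_alloc_copy(const unsigned char *hdr, size_t len)
{
	struct fake_skb *skb = malloc(sizeof(*skb));

	skb->data = malloc(len);	/* extra allocation per packet */
	memcpy(skb->data, hdr, len);	/* extra copy per packet */
	skb->len = len;
	skb->owns_data = 1;
	return skb;
}

/* Model of the napi_build_skb() path: only the metadata is allocated,
 * the existing RX buffer is wrapped as-is -- no copy, no data alloc. */
static struct fake_skb *rx_build(unsigned char *buf, size_t len)
{
	struct fake_skb *skb = malloc(sizeof(*skb));

	skb->data = buf;
	skb->len = len;
	skb->owns_data = 0;
	return skb;
}
```

In the real kernel, the metadata allocation is usually satisfied from the NAPI percpu skb cache, making the build path cheaper still.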

> 
> The single coherently allocated region for all headers reduces
> IOTLB pressure.

page_pool pages are mapped once at allocation.
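A simplified userspace sketch of that point (an analogy, not the kernel page_pool API): pages are DMA-mapped once when they enter the pool and then recycled, so the number of mappings stays bounded by the pool size no matter how many packets are received.

```c
#include <assert.h>
#include <stddef.h>

#define POOL_SIZE 4

struct fake_page {
	int mapped;	/* "DMA-mapped" flag, set exactly once */
};

struct fake_pool {
	struct fake_page pages[POOL_SIZE];
	int in_use[POOL_SIZE];
	int map_count;	/* how many "dma_map" operations happened */
};

static void pool_init(struct fake_pool *p)
{
	int i;

	p->map_count = 0;
	for (i = 0; i < POOL_SIZE; i++) {
		p->pages[i].mapped = 1;	/* map once, at allocation */
		p->in_use[i] = 0;
		p->map_count++;
	}
}

static struct fake_page *pool_get(struct fake_pool *p)
{
	int i;

	for (i = 0; i < POOL_SIZE; i++)
		if (!p->in_use[i]) {
			p->in_use[i] = 1;
			return &p->pages[i];
		}
	return NULL;
}

/* Recycle: the page goes back to the pool still mapped, no unmap/remap. */
static void pool_put(struct fake_pool *p, struct fake_page *pg)
{
	p->in_use[pg - p->pages] = 0;
}
```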

> 
> It is possible that the alternative model is faster. But that is not
> trivially obvious.
> 
> I think patches like this can stand on their own. Probably best to
> leave them out of the dependency series to enable XDP and AF_XDP.

You can't run XDP on a DMA coherent zone. To do that memcpy(), you need
to allocate a new skb with a linear part, which is usually done after
XDP; otherwise it's too much overhead with little-to-no benefit compared
to generic skb XDP.
The current idpf code is simply not compatible with the XDP code in this
series; it's pointless to do the work twice.

Disabling header split when XDP is enabled (the alternative option)
means disabling TCP zerocopy and worse performance in general, so I
don't consider that an option.

> 
>> Instead, use libie to create page_pools for the header buffers, allocate
>> them dynamically and then build an skb via napi_build_skb() around them
>> with no memory copy. With one exception...
>> When you enable header split, you expect you'll always have a separate
>> header buffer, so that you can reserve headroom and tailroom only
>> there and then use full buffers for the data. For example, this is how
>> TCP zerocopy works -- you have to have the payload aligned to PAGE_SIZE.
>> The current hardware running idpf does *not* guarantee that you'll
>> always have headers placed separately. For example, on my setup, even
>> ICMP packets are written as one piece to the data buffers. You can't
>> build a valid skb around a data buffer in this case.
>> To not complicate things and not lose TCP zerocopy etc., when such a
>> thing happens, use the empty header buffer and pull either the full
>> frame (if it's short) or just the Ethernet header there and build an
>> skb around it. The GRO layer will pull more from the data buffer
>> later. This workaround will hopefully be removed one day.
>>
>> Signed-off-by: Alexander Lobakin <aleksander.lobakin@...el.com>
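The fallback described in the quoted text above can be sketched as a small decision helper (names and the "short frame" threshold are hypothetical, not from the actual patch):

```c
#include <assert.h>
#include <stddef.h>

#define ETH_HLEN	14	/* Ethernet header length */
#define PULL_THRESHOLD	256	/* hypothetical "short frame" cutoff */

/* When the HW wrote the whole frame to the data buffer (hdr_len == 0),
 * decide how many bytes to pull into the empty header buffer so a
 * valid skb can be built around it. Returns 0 when no workaround is
 * needed because the HW split the headers properly. */
static size_t hdr_fallback_pull_len(size_t hdr_len, size_t data_len)
{
	if (hdr_len)			/* headers split by HW, no W/A */
		return 0;
	if (data_len <= PULL_THRESHOLD)	/* short frame: pull it whole */
		return data_len;
	return ETH_HLEN;		/* pull just the Ethernet header;
					 * GRO pulls more from the data
					 * buffer later */
}
```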

Thanks,
Olek
