Message-ID: <9922111a-63e6-468c-b2de-f9899e5b95cc@gmail.com>
Date: Mon, 28 Jul 2025 20:42:15 +0100
From: Pavel Begunkov <asml.silence@...il.com>
To: Mina Almasry <almasrymina@...gle.com>
Cc: Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
io-uring@...r.kernel.org, Eric Dumazet <edumazet@...gle.com>,
Willem de Bruijn <willemb@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
andrew+netdev@...n.ch, horms@...nel.org, davem@...emloft.net,
sdf@...ichev.me, dw@...idwei.uk, michael.chan@...adcom.com,
dtatulea@...dia.com, ap420073@...il.com
Subject: Re: [RFC v1 00/22] Large rx buffer support for zcrx
On 7/28/25 19:54, Mina Almasry wrote:
> On Mon, Jul 28, 2025 at 4:03 AM Pavel Begunkov <asml.silence@...il.com> wrote:
>>
>> This series implements large rx buffer support for io_uring/zcrx on
>> top of Jakub's queue configuration changes, but it can also be used
>> by other memory providers. Large rx buffers can be drastically
>> beneficial with high-end hw-gro enabled cards that can coalesce traffic
>> into larger pages, reducing the number of frags traversing the network
>> stack and resulting in larger contiguous chunks of data for
>> userspace. Benchmarks showed up to ~30% improvement in CPU util.
>>
>
> Very exciting.
>
> I have not yet had a chance to thoroughly look, but even still I have
> a few high level questions/concerns. Maybe you already have answers to
> them that can make my life a bit easier as I try to take a thorough
> look.
>
> - I'm a bit confused that you're not making changes to the core net
> stack to support non-PAGE_SIZE netmems. From a quick glance, it seems
> that there are potentially a ton of places in the net stack that
> assume PAGE_SIZE:
The stack already supports large frags, and that's not new. Page
pools have supported higher-order allocations for a while, see
__page_pool_alloc_page_order(). The tx path can allocate large pages
and coalesce user pages. Is there any specific place that concerns
you? There are many places legitimately using PAGE_SIZE: kmap'ing
folios, shifting it by order to get the size, linear allocations,
etc.
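To illustrate, a minimal sketch of how a driver can already ask the
page pool for non-PAGE_SIZE buffers today; the order, pool size and
device here are made up for the example:

#include <linux/dma-mapping.h>
#include <net/page_pool/helpers.h>

/* Minimal sketch: a pool created with order > 0 hands out
 * higher-order buffers; __page_pool_alloc_page_order() is what
 * services such pools internally. Parameter values are illustrative.
 */
static struct page_pool *create_large_rx_pool(struct device *dev)
{
	struct page_pool_params pp_params = {
		.order		= 2,	/* 16K chunks with 4K pages */
		.pool_size	= 256,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.flags		= PP_FLAG_DMA_MAP,
	};

	return page_pool_create(&pp_params);	/* ERR_PTR() on failure */
}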
> cd net
> ackc "PAGE_SIZE|PAGE_SHIFT" | wc -l
> 468
>
> Are we sure none of these places assuming PAGE_SIZE or PAGE_SHIFT are
> concerning?
>
> - You're not adding a field in the net_iov that tells us how big the
> net_iov is. It seems to me you're configuring the driver to set the rx
> buffer size, then assuming all the pp allocations are of that size,
> then assuming in the zcrx code that all the net_iovs are of that size.
> I think a few problems may happen?
>
> (a) what happens if the rx buffer size is re-configured? Does the
> io_uring zcrx instance get recreated as well?
Any reason you even want that to work? You can't, and frankly you
shouldn't be allowed to, at least in the io_uring case. Unless it's
rejected somewhere earlier, it'll fail on the order check while
trying to create a page pool with a zcrx provider.
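For the sake of illustration, the shape of that failure is roughly
the following; the zcrx_ifq struct and niov_order field are
hypothetical names, not the actual zcrx code:

/* Hypothetical sketch of the order check mentioned above: the
 * provider was registered for one buffer order, so a page pool
 * created after an rx buffer size reconfiguration won't match it. */
static int zcrx_mp_init(struct page_pool *pp)
{
	struct zcrx_ifq *ifq = pp->mp_priv;

	if (pp->p.order != ifq->niov_order)
		return -EINVAL;	/* rx buffer size no longer matches */
	return 0;
}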
> (b) what happens with skb coalescing? skb coalescing is already a bit
> of a mess. We don't allow coalescing unreadable and readable skbs, but
> we do allow coalescing devmem and io_uring zcrx skbs, which could lead
> to some bugs already, I'm guessing. AFAICT, as of this patch series we
> may allow coalescing of skbs with netmems of different sizes inside
> them, but so far the zcrx code assumes the size is constant across all
> the netmems it gets, which I'm not sure is always true?
It rejects niovs from other providers, including those from any other
io_uring instances, so it only assumes a uniform size for its own
niovs. The backing memory is verified up front to make sure it can be
chunked that way.
> For all these reasons I had assumed that we'd need space in the
> net_iov that tells us its size: net_iov->size.
Nope, not in this case.
> And then netmem_size(netmem) would replace all the PAGE_SIZE
> assumptions in the net stack, and then we'd disallow coalescing of
> skbs with different-sized netmems (else we need to handle them
> correctly per the netmem_size).
I'm not even sure what the concern is. What's the difference between
tcp_recvmsg_dmabuf() getting one skb with differently sized frags and
getting the same frags spread across separate skbs? You still need to
handle it somehow, even if by failing.

Also, we should never coalesce different niovs together, regardless
of their sizes. And coalescing two chunks of the same niov should
work just fine even without knowing the length:
static inline bool skb_can_coalesce_netmem(struct sk_buff *skb, int i,
					   const netmem_ref netmem, int off)
{
	...
	return netmem == skb_frag_netmem(frag) &&
	       off == skb_frag_off(frag) + skb_frag_size(frag);
}
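In other words, the check only extends the previous frag when the new
chunk comes from the very same netmem and starts exactly where that
frag ends, so it's correct regardless of the niov length.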
Essentially, for devmem, only tcp_recvmsg_dmabuf() and other
devmem-specific code would need to know about the niov size.
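As an illustration, a made-up helper (not the actual devmem code)
that sizes a payload this way, reading the per-frag length from the
skb metadata rather than assuming PAGE_SIZE:

#include <linux/skbuff.h>

/* Illustrative sketch: the per-frag length already lives in the skb
 * metadata, so a devmem-aware consumer reads it from there instead
 * of assuming every frag is PAGE_SIZE bytes long. */
static size_t devmem_payload_bytes(const struct sk_buff *skb)
{
	const struct skb_shared_info *shinfo = skb_shinfo(skb);
	size_t total = 0;
	int i;

	for (i = 0; i < shinfo->nr_frags; i++)
		total += skb_frag_size(&shinfo->frags[i]);

	return total;
}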
--
Pavel Begunkov