Message-ID: <52597d29-6de4-4292-b3f0-743266a8dcff@gmail.com>
Date: Mon, 28 Jul 2025 23:44:11 +0100
From: Pavel Begunkov <asml.silence@...il.com>
To: Stanislav Fomichev <stfomichev@...il.com>
Cc: Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
io-uring@...r.kernel.org, Eric Dumazet <edumazet@...gle.com>,
Willem de Bruijn <willemb@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
andrew+netdev@...n.ch, horms@...nel.org, davem@...emloft.net,
sdf@...ichev.me, almasrymina@...gle.com, dw@...idwei.uk,
michael.chan@...adcom.com, dtatulea@...dia.com, ap420073@...il.com
Subject: Re: [RFC v1 00/22] Large rx buffer support for zcrx
On 7/28/25 23:06, Stanislav Fomichev wrote:
> On 07/28, Pavel Begunkov wrote:
>> On 7/28/25 21:21, Stanislav Fomichev wrote:
>>> On 07/28, Pavel Begunkov wrote:
>>>> On 7/28/25 18:13, Stanislav Fomichev wrote:
>> ...
>>>>> Supporting big buffers is the right direction, but I have the same
>>>>> feedback:
>>>>
>>>> Let me actually check the feedback for the queue config RFC...
>>>>
>>>> it would be nice to fit a cohesive story for the devmem as well.
>>>>
>>>> Only the last patch is zcrx specific, the rest is agnostic,
>>>> devmem can absolutely reuse that. I don't think there are any
>>>> issues wiring up devmem?
>>>
>>> Right, but the patch number 2 exposes per-queue rx-buf-len which
>>> I'm not sure is the right fit for devmem, see below. If all you
>>
>> I guess you're talking about uapi setting it, because as an
>> internal per queue parameter IMHO it does make sense for devmem.
>>
>>> care is exposing it via io_uring, maybe don't expose it from netlink for
>>
>> Sure, I can remove the set operation.
>>
>>> now? Although I'm not sure I understand why you're also passing
>>> this per-queue value via io_uring. Can you not inherit it from the
>>> queue config?
>>
>> It's not a great option. It complicates user space with netlink.
>> And there are convenience configuration features in the future
>> that require io_uring to parse the memory first. E.g. instead of
>> user specifying a particular size, it can say "choose the largest
>> length under 32K that the backing memory allows".
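To sketch what I mean (standalone illustration only; the helper name,
the 4K floor and the power-of-two cap are mine, not from the series):
pick the biggest power-of-two chunk, capped at 32K, that both the base
address and the size of the registered area are aligned to.

#include <stdint.h>
#include <stddef.h>

/* Largest power-of-two buffer length <= cap that divides both the
 * base address and the size of the registered area; 0 if none fits.
 */
static size_t largest_buf_len(uintptr_t base, size_t size, size_t cap)
{
	size_t len = cap;	/* cap assumed to be a power of two, e.g. 32768 */

	while (len > 4096 && ((base | size) & (len - 1)))
		len >>= 1;	/* fall back to the next smaller power of two */

	return ((base | size) & (len - 1)) ? 0 : len;
}

E.g. largest_buf_len(area_base, area_size, 32768) gives 32K for a
32K-aligned area and degrades gracefully for smaller alignments.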
>
> Don't you already need a bunch of netlink to setup rss and flow
> steering?

Could be needed, but there are cases where configuration and
virtual queue selection are done outside the program. I'll need
to ask which option we currently use.

> And if we end up adding queue api, you'll have to call that
> one over netlink also.

There is already a queue api, even though it's cropped IIUC.
What kind of extra setup do you have in mind?
>>>
>>> If we assume that at some point niov can be backed up by chunks larger
>>> than PAGE_SIZE, the assumed workflow for devmem is:
>>> 1. change rx-buf-len to 32K
>>> - this is needed only for devmem, not for CPU RAM, but we'll have
>>> to refill the queues from the main memory anyway
>>
>> Urgh, that's another reason why I prefer to just pass it through
>> zcrx and not netlink. So maybe you can just pass the len to devmem
>> on creation, and internally it sets up its queues with it.
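To sketch that (purely illustrative types, not the real binding uAPI or
kernel structs): the length would travel with the binding and be applied
to each bound queue internally, instead of being a separate per-queue
netlink knob.

#include <stddef.h>

struct binding_cfg {
	unsigned int rx_buf_len;	/* requested chunk size, e.g. 32768 */
};

struct rxq_cfg {
	unsigned int buf_len;		/* what the driver will allocate with */
};

/* Applied by the core when the binding is installed */
static void apply_binding(struct rxq_cfg *queues, size_t nq,
			  const struct binding_cfg *b)
{
	for (size_t i = 0; i < nq; i++)
		queues[i].buf_len = b->rx_buf_len;
}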
>
> But you still need to solve MAX_PAGE_ORDER/PAGE_ALLOC_COSTLY_ORDER I
> think? We don't want the drivers to do allocations above
> PAGE_ALLOC_COSTLY_ORDER, presumably?

#define PAGE_ALLOC_COSTLY_ORDER 3

It's "costly" for the page allocator, not for custom, specially
cooked memory providers. Nobody should care as long as the length
applies to the given provider only. MAX_PAGE_ORDER also seems to
be a page allocator thing.
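To illustrate the distinction (the function name is mine, just a sketch
of the point, not code from the series): the order limits only matter
when the chunk actually has to come from the buddy allocator.

#include <linux/mm.h>
#include <linux/mmzone.h>

static bool rx_buf_len_ok(unsigned int len, bool from_provider)
{
	unsigned int order = get_order(len);	/* 32K -> order 3 with 4K pages */

	if (from_provider)
		return true;	/* provider-owned chunks, buddy limits don't apply */

	return order <= PAGE_ALLOC_COSTLY_ORDER && order <= MAX_PAGE_ORDER;
}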
--
Pavel Begunkov