Message-ID: <275aa42e-51ba-1b31-15aa-3528dc29b447@kernel.org>
Date: Thu, 17 Aug 2023 20:21:49 -0600
From: David Ahern <dsahern@...nel.org>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Willem de Bruijn <willemdebruijn.kernel@...il.com>,
Mina Almasry <almasrymina@...gle.com>, netdev@...r.kernel.org,
Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Magnus Karlsson <magnus.karlsson@...el.com>, sdf@...gle.com,
Willem de Bruijn <willemb@...gle.com>, Kaiyuan Zhang <kaiyuanz@...gle.com>
Subject: Re: [RFC PATCH v2 02/11] netdev: implement netlink api to bind
dma-buf to netdevice
On 8/17/23 8:09 PM, Jakub Kicinski wrote:
>>
>> Flow steering to TC offloads -- more details on what you were thinking here?
>
> I think TC flower can do almost everything ethtool -N can.
> So do we continue to develop both APIs, or pick one?
Ok, tc flower; that did not come to mind. I don't use it often.
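
For reference, something like the following should be roughly
equivalent (illustrative only: eth0, the port and the queue index are
placeholders, and whether the flower rule can actually be offloaded
depends on the driver):

  # ethtool ntuple rule: steer TCP dst port 5201 to rx queue 2
  ethtool -N eth0 flow-type tcp4 dst-port 5201 action 2

  # roughly equivalent tc flower rule via skbedit queue_mapping,
  # offloaded to hardware where the driver supports it (skip_sw)
  tc qdisc add dev eth0 ingress
  tc filter add dev eth0 ingress protocol ip flower \
      ip_proto tcp dst_port 5201 skip_sw \
      action skbedit queue_mapping 2
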
>
>>>> I don't have a good sense of what a good model for cleanup and
>>>> permissions is (B). All I know is that if we need to tie things to
>>>> processes netlink can do it, and we shouldn't have to create our
>>>> own FS and special file descriptors...
>>
>> From my perspective the main sticking point that has not been handled is
>> flushing buffers from the RxQ, but that is 100% tied to queue
>> management and a process' ability to effect a flush or queue teardown -
>> and that is the focus of your list below:
>
> If you're thinking about it from the perspective of "application died
> give me back all the buffers" - the RxQ is just one piece, right?
> As we discovered with page pool - packets may get stuck in the stack
> forever.
Yes, the TCP retransmit queue is one of those places where buffer
references can get stuck for some amount of time.
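
A rough way to watch that from user space (standard ss options; the
exact fields reported vary by kernel version):

  # per-socket memory, including buffers still charged to the
  # send/retransmit queues of established TCP sockets
  ss -t -m -n
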
>>
>> `ethtool -L/-G` and `ip link set {up/down}` pertain to the "general OS"
>> queues managed by a driver for generic workloads and networking
>> management (e.g., neigh discovery, icmp, etc.). The discussion here
>> pertains to processes wanting to use their own memory or GPU memory in a
>> queue. Processes will come and go and the queue management needs to
>> align with that need without affecting all of the other queues managed
>> by the driver.
>
> For sure, I'm just saying that the old uAPI can be translated to
> the new driver API, and so should the new uAPIs. I focused on the
> driver-facing APIs because I think it's the hard part. We have
> many drivers; the uAPI is more easily dreamed up, no?
sure.
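
To make the contrast above concrete, the "general OS" knobs are
device-wide and touch every queue the driver owns (illustrative;
eth0 and the values are placeholders):

  # channel (queue) count for the whole device
  ethtool -L eth0 combined 8

  # rx/tx ring sizes for the whole device
  ethtool -G eth0 rx 4096 tx 4096

  # a full down/up cycle reinitializes every queue
  ip link set dev eth0 down
  ip link set dev eth0 up

The per-process dma-buf binding discussed here needs to be scoped to
individual queues so it can come and go with the process without that
kind of device-wide disruption.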