Message-ID: <eb34f812-a866-a1a3-9f9b-7d5054d17609@kernel.org>
Date: Tue, 18 Jul 2023 16:35:17 -0600
From: David Ahern <dsahern@...nel.org>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Jason Gunthorpe <jgg@...pe.ca>, Mina Almasry <almasrymina@...gle.com>,
Andy Lutomirski <luto@...nel.org>, linux-kernel@...r.kernel.org,
linux-media@...r.kernel.org, dri-devel@...ts.freedesktop.org,
linaro-mm-sig@...ts.linaro.org, netdev@...r.kernel.org,
linux-arch@...r.kernel.org, linux-kselftest@...r.kernel.org,
Sumit Semwal <sumit.semwal@...aro.org>,
Christian König <christian.koenig@....com>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>, Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>, Arnd Bergmann
<arnd@...db.de>, Willem de Bruijn <willemdebruijn.kernel@...il.com>,
Shuah Khan <shuah@...nel.org>
Subject: Re: [RFC PATCH 00/10] Device Memory TCP
On 7/18/23 12:29 PM, Jakub Kicinski wrote:
> On Tue, 18 Jul 2023 12:20:59 -0600 David Ahern wrote:
>> On 7/18/23 12:15 PM, Jakub Kicinski wrote:
>>> On Tue, 18 Jul 2023 15:06:29 -0300 Jason Gunthorpe wrote:
>>>> netlink feels like a weird API choice for that; in particular, it would
>>>> be really wrong to somehow bind the lifecycle of a netlink object to a
>>>> process.
>>>
>>> Netlink is the right API; the lifecycle of objects can easily be tied
>>> to a netlink socket.
>>
>> That is an unintuitive connection -- memory references, h/w queues, and
>> flow steering should be tied to the datapath socket, not a control-plane
>> socket.
>
> There's one RSS context for many datapath sockets. Plus a lot of the
> APIs already exist, and it's more of a question of packaging them up
> at the user space level. For things which do not have an API, however,
> netlink, please.
I do not see how one RSS context (or more specifically a h/w Rx queue)
can be used properly with memory from different processes (or different
dma-buf references). When a process dies, its memory needs to be flushed
from the h/w queues, and queues with interleaved submissions from
multiple owners make that cleanup more complicated.
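
To make the lifetime question concrete: as far as I know, the in-kernel
hook for keying cleanup off a netlink socket is the NETLINK_URELEASE
notifier. Roughly (the devmem_* names below are made up, error handling
elided), the flush-on-close path would have to look something like:

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/netlink.h>
#include <linux/notifier.h>

/* Hypothetical per-binding state: dma-buf pages posted to an Rx queue,
 * owned by the netlink socket identified by (net, portid).
 */
struct devmem_binding {
        struct list_head list;
        struct net *net;
        u32 portid;     /* control socket that created the binding */
        /* dma-buf attachment, h/w queue id, etc. */
};

static LIST_HEAD(devmem_bindings);
static DEFINE_MUTEX(devmem_lock);

static int devmem_nl_event(struct notifier_block *nb, unsigned long event,
                           void *ptr)
{
        struct netlink_notify *n = ptr;
        struct devmem_binding *b, *tmp;

        if (event != NETLINK_URELEASE || n->protocol != NETLINK_GENERIC)
                return NOTIFY_DONE;

        /* The control socket is gone (process exit included): drop its
         * bindings and flush its posted buffers from the h/w queue.
         */
        mutex_lock(&devmem_lock);
        list_for_each_entry_safe(b, tmp, &devmem_bindings, list) {
                if (b->net == n->net && b->portid == n->portid) {
                        list_del(&b->list);
                        /* devmem_flush_queue(b);  hypothetical */
                }
        }
        mutex_unlock(&devmem_lock);
        return NOTIFY_OK;
}

static struct notifier_block devmem_nl_notifier = {
        .notifier_call  = devmem_nl_event,
};
/* registered at init via netlink_register_notifier(&devmem_nl_notifier) */

Note the cleanup is keyed by (net, portid): if other processes still have
buffers posted to the same queue, that walk has to leave theirs intact --
which is exactly the interleaving problem above.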
I guess the devil is in the details; I look forward to the evolution of
the patches.