Message-ID: <281b12de-7730-05a7-1187-e4f702c19cda@gmail.com>
Date: Fri, 11 Dec 2020 12:59:52 -0700
From: David Ahern <dsahern@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Boris Pismenny <borispismenny@...il.com>,
Boris Pismenny <borisp@...lanox.com>, davem@...emloft.net,
saeedm@...dia.com, hch@....de, sagi@...mberg.me, axboe@...com,
kbusch@...nel.org, viro@...iv.linux.org.uk, edumazet@...gle.com,
boris.pismenny@...il.com, linux-nvme@...ts.infradead.org,
netdev@...r.kernel.org, benishay@...dia.com, ogerlitz@...dia.com,
yorayz@...dia.com, Ben Ben-Ishay <benishay@...lanox.com>,
Or Gerlitz <ogerlitz@...lanox.com>,
Yoray Zack <yorayz@...lanox.com>,
Boris Pismenny <borisp@...dia.com>
Subject: Re: [PATCH v1 net-next 02/15] net: Introduce direct data placement
tcp offload
On 12/11/20 11:45 AM, Jakub Kicinski wrote:
> Ack, these patches are not exciting (to me), so I'm wondering if there
> is a better way. The only reason a NIC would have to understand a ULP for
> ZC is to parse out header/message lengths. There's gotta be a way to
> pass those in header options or such...
>
> And, you know, if we figure something out - maybe we stand a chance
> against having 4 different zero copy implementations (this, TCP,
> AF_XDP, netgpu) :(
AF_XDP is for userspace IP/TCP/packet handling.

netgpu is fundamentally a kernel-stack socket with zerocopy direct to
userspace buffers. Yes, it has additional complications, such as the
process running on a GPU, but its essential characteristics are header /
payload split, a kernel-stack-managed socket, and dedicated H/W queues
for the socket, and all of those are common to what the nvme patches are
doing. So can we create a common framework for those characteristics,
one that also enables other use cases (like a more generic Rx zerocopy)?
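As a rough strawman only, something like the sketch below is what I have
in mind. None of these structs or names exist today; the NVMe/TCP common
PDU header is used purely as an example of the per-ULP "message length"
hook Jakub mentioned, and a real design would obviously live in the
kernel rather than this standalone C:

/*
 * Hypothetical sketch -- not an existing API.  One "DDP framework" that
 * captures the three shared characteristics (header/payload split, a
 * kernel-managed socket, dedicated H/W queues), with the ULP-specific
 * knowledge reduced to a single hook that reports message boundaries.
 */
#include <stdint.h>
#include <stddef.h>

/* NVMe/TCP common PDU header layout (per the NVMe/TCP spec), shown only
 * as an example of the header/message length a ULP hook would parse. */
struct nvme_tcp_pdu_hdr {
	uint8_t  type;   /* PDU type */
	uint8_t  flags;
	uint8_t  hlen;   /* PDU header length */
	uint8_t  pdo;    /* PDU data offset */
	uint32_t plen;   /* total PDU length, little-endian on the wire */
};

/* Hypothetical per-ULP ops: the only protocol knowledge the framework
 * (or the NIC, via firmware/parser configuration) would need. */
struct ddp_ulp_ops {
	/* Return the total length of the message starting at @hdr, or a
	 * negative value if @len bytes are not yet enough to decide. */
	int (*msg_len)(const void *hdr, size_t len);
	size_t min_hdr_len;	/* bytes needed before msg_len() can run */
};

/* Example ULP hook for NVMe/TCP. */
static int nvme_tcp_msg_len(const void *hdr, size_t len)
{
	const struct nvme_tcp_pdu_hdr *pdu = hdr;

	if (len < sizeof(*pdu))
		return -1;
	/* plen is little-endian on the wire; assume an LE host here. */
	return (int)pdu->plen;
}

/* Hypothetical binding of a kernel-managed socket to a dedicated H/W
 * queue; the buffer table is where direct placement would target (user
 * pages, GPU memory, or NVMe destination buffers). */
struct ddp_sock_binding {
	int sock_fd;			/* stands in for the kernel socket */
	int hw_queue_id;		/* dedicated Rx queue for this flow */
	const struct ddp_ulp_ops *ops;	/* NULL => plain header/payload split */
	void **placement_buffers;	/* registered destination buffers */
	size_t nr_buffers;
};

With ops left NULL the same binding degenerates to plain header/payload
split on a dedicated queue, i.e., the more generic Rx zerocopy case, and
netgpu and this nvme offload would just be two ULP users of it.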