[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20240610122939.GA21899@lst.de>
Date: Mon, 10 Jun 2024 14:29:39 +0200
From: Christoph Hellwig <hch@....de>
To: Sagi Grimberg <sagi@...mberg.me>
Cc: Christoph Hellwig <hch@....de>, Jakub Kicinski <kuba@...nel.org>,
Aurelien Aptel <aaptel@...dia.com>, linux-nvme@...ts.infradead.org,
netdev@...r.kernel.org, kbusch@...nel.org, axboe@...com,
chaitanyak@...dia.com, davem@...emloft.net
Subject: Re: [PATCH v25 00/20] nvme-tcp receive offloads
On Mon, Jun 03, 2024 at 10:09:26AM +0300, Sagi Grimberg wrote:
>> IETF has standardized a generic data placement protocol, which is
>> part of iWarp. Even if folks don't like RDMA it exists to solve
>> exactly these kinds of problems of data placement.
>
> iWARP changes the wire protocol.
Compared to plain NVMe over TCP that's a bit of an understatement :)
> Is your comment to just go make people
> use iWARP instead of TCP? or extending NVMe/TCP to natively support DDP?
I don't know to be honest. In many ways just using RDMA instead of
NVMe/TCP would solve all the problems this is trying to solve, but
there are enough big customers that have religious concerns about
the use of RDMA.
So if people want to use something that looks non-RDMA but have the
same benefits we have to reinvent it quite similarly under a different
name. Looking at DDP and what we can learn from it without bringing
the Verbs API along might be one way to do that.
Another would be to figure out what amount of similarity and what
amount of state we need in an on the wire protocol to have an
efficient header splitting in the NIC, either hard coded or even
better downloadable using something like eBPF.
> That would be great, but what does a "vendor independent without hooks"
> look like from
> your perspective? I'd love having this translate to standard (and some new)
> socket operations,
> but I could not find a way that this can be done given the current
> architecture.
Any amount of calls into NIC/offload drivers from NVMe is a nogo.
Powered by blists - more mailing lists