Message-ID: <a41ff414-4286-e5e9-5b80-85d87533361e@grimberg.me>
Date: Mon, 9 Nov 2020 15:23:54 -0800
From: Sagi Grimberg <sagi@...mberg.me>
To: Shai Malin <smalin@...vell.com>,
"linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
Boris Pismenny <borispismenny@...il.com>,
Boris Pismenny <borisp@...lanox.com>,
"kuba@...nel.org" <kuba@...nel.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"saeedm@...dia.com" <saeedm@...dia.com>, "hch@....de" <hch@....de>,
"axboe@...com" <axboe@...com>,
"kbusch@...nel.org" <kbusch@...nel.org>,
"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
"edumazet@...gle.com" <edumazet@...gle.com>
Cc: Yoray Zack <yorayz@...lanox.com>,
Ben Ben-Ishay <benishay@...lanox.com>,
"boris.pismenny@...il.com" <boris.pismenny@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Or Gerlitz <ogerlitz@...lanox.com>,
Ariel Elior <aelior@...vell.com>,
Michal Kalderon <mkalderon@...vell.com>
Subject: Re: [PATCH net-next RFC v1 05/10] nvme-tcp: Add DDP offload control
path
>>> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
>>> index 8f4f29f18b8c..06711ac095f2 100644
>>> --- a/drivers/nvme/host/tcp.c
>>> +++ b/drivers/nvme/host/tcp.c
>>> @@ -62,6 +62,7 @@ enum nvme_tcp_queue_flags {
>>> NVME_TCP_Q_ALLOCATED = 0,
>>> NVME_TCP_Q_LIVE = 1,
>>> NVME_TCP_Q_POLLING = 2,
>>> + NVME_TCP_Q_OFFLOADS = 3,
>
> Sagi - following our discussion and your suggestions regarding the
> NVMeTCP Offload ULP module that we are working on at Marvell, in which
> a TCP_OFFLOAD transport type would be added,
We still need to see how this pans out... it's hard to predict if this
is the best approach before seeing the code. I'd suggest sharing some
code so that others can give their input.
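
If it helps the discussion, here is a rough sketch of what registering
such a transport type could look like against the existing
nvmf_register_transport() interface in drivers/nvme/host/fabrics.h. The
"tcp_offload" name, the stub create_ctrl and the option flags below are
purely illustrative; nothing here is taken from the actual series:

/*
 * Rough sketch only: assumes the nvmf_transport_ops interface from
 * drivers/nvme/host/fabrics.h; the "tcp_offload" name and the stub
 * create_ctrl callback are illustrative, not from the posted code.
 */
static struct nvme_ctrl *nvme_tcp_ofld_create_ctrl(struct device *dev,
		struct nvmf_ctrl_options *opts)
{
	/* the real ULP would allocate and bring up an offloaded controller */
	return ERR_PTR(-EOPNOTSUPP);
}

static struct nvmf_transport_ops nvme_tcp_ofld_transport = {
	.name		= "tcp_offload",
	.module		= THIS_MODULE,
	.required_opts	= NVMF_OPT_TRADDR,
	.allowed_opts	= NVMF_OPT_TRSVCID | NVMF_OPT_NR_IO_QUEUES |
			  NVMF_OPT_RECONNECT_DELAY,
	.create_ctrl	= nvme_tcp_ofld_create_ctrl,
};

static int __init nvme_tcp_ofld_init_module(void)
{
	return nvmf_register_transport(&nvme_tcp_ofld_transport);
}

static void __exit nvme_tcp_ofld_cleanup_module(void)
{
	nvmf_unregister_transport(&nvme_tcp_ofld_transport);
}

module_init(nvme_tcp_ofld_init_module);
module_exit(nvme_tcp_ofld_cleanup_module);
MODULE_LICENSE("GPL v2");
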
> we are concerned that perhaps the generic term "offload" for both the
> transport type (for the Marvell work) and for the DDP and CRC offload
> queue (for the Mellanox work) may be misleading and confusing to
> developers and to users. Perhaps the naming should be "direct data
> placement", e.g. NVME_TCP_Q_DDP or NVME_TCP_Q_DIRECT?
We can call this NVME_TCP_Q_DDP, no issues with that.
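Something like the below against the hunk quoted above (just a sketch;
the helper is hypothetical, shown only to illustrate that the renamed
bit would be tested the same way as the existing queue flags):

enum nvme_tcp_queue_flags {
	NVME_TCP_Q_ALLOCATED	= 0,
	NVME_TCP_Q_LIVE		= 1,
	NVME_TCP_Q_POLLING	= 2,
	NVME_TCP_Q_DDP		= 3,	/* DDP/CRC offload enabled on this queue */
};

/* hypothetical helper, mirroring how the other queue flags are tested */
static inline bool nvme_tcp_queue_ddp_enabled(struct nvme_tcp_queue *queue)
{
	return test_bit(NVME_TCP_Q_DDP, &queue->flags);
}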