Date:   Sun, 8 Nov 2020 06:51:43 +0000
From:   Shai Malin <smalin@...vell.com>
To:     "linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
        "Sagi Grimberg" <sagi@...mberg.me>,
        Boris Pismenny <borispismenny@...il.com>,
        Boris Pismenny <borisp@...lanox.com>,
        "kuba@...nel.org" <kuba@...nel.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "saeedm@...dia.com" <saeedm@...dia.com>, "hch@....de" <hch@....de>,
        "axboe@...com" <axboe@...com>,
        "kbusch@...nel.org" <kbusch@...nel.org>,
        "viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
        "edumazet@...gle.com" <edumazet@...gle.com>
CC:     Yoray Zack <yorayz@...lanox.com>,
        Ben Ben-Ishay <benishay@...lanox.com>,
        "boris.pismenny@...il.com" <boris.pismenny@...il.com>,
        "linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Or Gerlitz <ogerlitz@...lanox.com>,
        Ariel Elior <aelior@...vell.com>,
        Michal Kalderon <mkalderon@...vell.com>
Subject: RE: [PATCH net-next RFC v1 05/10] nvme-tcp: Add DDP offload control
 path


On 09/10/2020 1:19, Sagi Grimberg wrote:
> On 9/30/20 9:20 AM, Boris Pismenny wrote:
> > This commit introduces direct data placement offload to NVME TCP.
> > There is a context per queue, which is established after the 
> > handshake using the tcp_ddp_sk_add/del NDOs.
> >
> > Additionally, a resynchronization routine is used to assist hardware 
> > recovery from TCP OOO, and continue the offload.
> > Resynchronization operates as follows:
> > 1. TCP OOO causes the NIC HW to stop the offload.
> > 2. NIC HW identifies a PDU header at some TCP sequence number, and asks
> > NVMe-TCP to confirm it.
> > This request is delivered from the NIC driver to NVMe-TCP by first
> > finding the socket for the packet that triggered the request, and
> > then finding the nvme_tcp_queue that is used by this routine.
> > Finally, the request is recorded in the nvme_tcp_queue.
> > 3. When NVMe-TCP observes the requested TCP sequence, it will 
> > compare it with the PDU header TCP sequence, and report the result 
> > to the NIC driver (tcp_ddp_resync), which will update the HW, and 
> > resume offload when all is successful.
> >
> > Furthermore, we let the offloading driver advertise the max
> > hw sectors/segments via tcp_ddp_limits.
> >
> > A follow-up patch introduces the data-path changes required for this 
> > offload.
> >
> > Signed-off-by: Boris Pismenny <borisp@...lanox.com>
> > Signed-off-by: Ben Ben-Ishay <benishay@...lanox.com>
> > Signed-off-by: Or Gerlitz <ogerlitz@...lanox.com>
> > Signed-off-by: Yoray Zack <yorayz@...lanox.com>
> > ---
> >   drivers/nvme/host/tcp.c  | 188 +++++++++++++++++++++++++++++++++++++++
> >   include/linux/nvme-tcp.h |   2 +
> >   2 files changed, 190 insertions(+)
> >
> > diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> > index 8f4f29f18b8c..06711ac095f2 100644
> > --- a/drivers/nvme/host/tcp.c
> > +++ b/drivers/nvme/host/tcp.c
> > @@ -62,6 +62,7 @@ enum nvme_tcp_queue_flags {
> >   	NVME_TCP_Q_ALLOCATED	= 0,
> >   	NVME_TCP_Q_LIVE		= 1,
> >   	NVME_TCP_Q_POLLING	= 2,
> > +	NVME_TCP_Q_OFFLOADS     = 3,

Sagi - following our discussion and your suggestions regarding the NVMeTCP Offload ULP module that we are working on at Marvell, in which a TCP_OFFLOAD transport type would be added: we are concerned that using the generic term "offload" both for the transport type (the Marvell work) and for the DDP and CRC offload queue flag (the Mellanox work) may be misleading and confusing to developers and users.

Perhaps the naming should reflect "direct data placement" instead, e.g. NVME_TCP_Q_DDP or NVME_TCP_Q_DIRECT?
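
For illustration, the queue flag could then read as follows (only a naming suggestion, not code from the posted patch):

enum nvme_tcp_queue_flags {
	NVME_TCP_Q_ALLOCATED	= 0,
	NVME_TCP_Q_LIVE		= 1,
	NVME_TCP_Q_POLLING	= 2,
	NVME_TCP_Q_DDP		= 3,	/* or NVME_TCP_Q_DIRECT */
};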
Also, no need to quote the entire patch. Just a few lines above your response like I did here.
