Message-ID: <ab298844-c95e-43e6-b4bb-fe5ce78655d8@gmail.com>
Date:   Wed, 9 Dec 2020 10:15:56 +0200
From:   Boris Pismenny <borispismenny@...il.com>
To:     David Ahern <dsahern@...il.com>,
        Boris Pismenny <borisp@...lanox.com>, kuba@...nel.org,
        davem@...emloft.net, saeedm@...dia.com, hch@....de,
        sagi@...mberg.me, axboe@...com, kbusch@...nel.org,
        viro@...iv.linux.org.uk, edumazet@...gle.com
Cc:     boris.pismenny@...il.com, linux-nvme@...ts.infradead.org,
        netdev@...r.kernel.org, benishay@...dia.com, ogerlitz@...dia.com,
        yorayz@...dia.com, Ben Ben-Ishay <benishay@...lanox.com>,
        Or Gerlitz <ogerlitz@...lanox.com>,
        Yoray Zack <yorayz@...lanox.com>,
        Boris Pismenny <borisp@...dia.com>
Subject: Re: [PATCH v1 net-next 02/15] net: Introduce direct data placement
 tcp offload

On 09/12/2020 2:38, David Ahern wrote:
> 
> The AF_XDP reference was to differentiate one zerocopy use case (all
> packets go to userspace) from another (kernel managed TCP socket with
> zerocopy payload). You are focusing on a very narrow use case - kernel
> based NVMe over TCP - of a more general problem.
> 

Please note that although our framework implements support for nvme-tcp,
we designed it to fit iSCSI as well, and hopefully future protocols too,
keeping it as general as we could. See below for why it could not be
generalized further.

> You have a TCP socket and a design that only works for kernel owned
> sockets. You have specialized queues in the NIC, a flow rule directing
> packets to those queues. Presumably some ULP parser in the NIC
> associated with the queues to process NVMe packets. Rather than copying
> headers (ethernet/ip/tcp) to one buffer and payload to another (which is
> similar to what Jonathan Lemon is working on), this design has a ULP
> processor that just splits out the TCP payload even more making it
> highly selective about which part of the packet is put into which
> buffer. Take out the NVMe part, and it is header split with zerocopy for
> the payload - a generic feature that can have a wider impact with NVMe
> as a special case.
> 

There is more to this than the TCP zerocopy that exists in userspace or
inside the kernel. First, please note that the patches include support for
CRC offload as well as data placement. Second, data placement is not the same
as zerocopy, for the following reasons:
(1) The former places data *exactly* in the buffers the user requested,
regardless of the order in which responses arrive, while the latter places
packets in anonymous buffers in packet-arrival order. Therefore, zerocopy
can be implemented on top of data placement, but not vice versa.
(2) Data placement supports sub-page zerocopy, unlike page-flipping
techniques (e.g., TCP_ZEROCOPY_RECEIVE).
(3) Page-flipping can't work for any storage initiator, because the
destination buffer is owned by the page cache or by a process using O_DIRECT.
(4) Storage-over-TCP PDUs are not necessarily aligned to TCP packets,
i.e., a PDU header can start in the middle of a packet, so header-data split
alone isn't enough (see the sketch right after this list).
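
To illustrate (4): because a PDU header can start anywhere inside a TCP
segment, the parser has to carry state across segments to know which byte
ranges are PDU headers (kept aside by the NIC/driver) and which are PDU
data (placed directly into the buffer the ULP registered). Below is a
minimal userspace sketch of that state machine -- not code from the
patches; the names (ddp_parser, ddp_parse) and the fixed 8-byte header
with a length field are made up for illustration:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PDU_HDR_LEN 8	/* made-up fixed header length, for illustration */

struct ddp_parser {
	uint32_t hdr_left;		/* header bytes still expected */
	uint32_t data_left;		/* data bytes still expected */
	uint32_t hdr_off;		/* header bytes collected so far */
	uint8_t  hdr[PDU_HDR_LEN];	/* a header may span TCP segments */
};

static uint32_t min_u32(uint32_t a, uint32_t b)
{
	return a < b ? a : b;
}

static void ddp_parser_init(struct ddp_parser *p)
{
	memset(p, 0, sizeof(*p));
	p->hdr_left = PDU_HDR_LEN;	/* stream starts with a PDU header */
}

/* Feed one TCP segment's payload to the per-connection parser. */
static void ddp_parse(struct ddp_parser *p, const uint8_t *buf, uint32_t len)
{
	while (len) {
		if (p->hdr_left) {
			uint32_t n = min_u32(len, p->hdr_left);

			/* Header bytes: collect them; they may span segments. */
			memcpy(p->hdr + p->hdr_off, buf, n);
			p->hdr_off += n;
			p->hdr_left -= n;
			buf += n;
			len -= n;
			if (!p->hdr_left) {
				/* Pretend the first 4 bytes encode the data length. */
				memcpy(&p->data_left, p->hdr, sizeof(p->data_left));
				printf("PDU header complete, %u data bytes follow\n",
				       p->data_left);
			}
		} else {
			uint32_t n = min_u32(len, p->data_left);

			/*
			 * Data bytes: in the real offload these are what the NIC
			 * places straight into the destination buffer at the right
			 * offset.  Plain header-data split cannot do this, since it
			 * has no notion of where a PDU starts or ends.
			 */
			printf("place %u PDU data bytes directly\n", n);
			p->data_left -= n;
			buf += n;
			len -= n;
			if (!p->data_left) {
				p->hdr_left = PDU_HDR_LEN;
				p->hdr_off = 0;
			}
		}
	}
}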

I wish we could do the same using some simpler zerocopy mechanism; it
would indeed simplify things. Unfortunately, that would severely restrict
generality (no sub-page support, and PDUs would have to be aligned to
packets) and hurt performance (ordering of PDUs).
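
To make the ordering point concrete, here is another hypothetical sketch
(again, not code from the patches; ddp_register, ddp_dest_for and MAX_TAGS
are made-up names): the ULP registers a destination buffer per command tag
before issuing the command, so a data PDU can be placed at exactly the
right offset of exactly the right buffer no matter the order in which
responses arrive. Anonymous-buffer zerocopy has no equivalent of this
table.

#include <stddef.h>
#include <stdint.h>

#define MAX_TAGS 128	/* made-up queue depth for the example */

struct ddp_dest {
	void   *buf;	/* where the ULP wants this command's data */
	size_t  len;
};

static struct ddp_dest ddp_table[MAX_TAGS];

/* Called by the ULP before the command PDU is sent. */
static int ddp_register(uint16_t tag, void *buf, size_t len)
{
	if (tag >= MAX_TAGS)
		return -1;
	ddp_table[tag].buf = buf;
	ddp_table[tag].len = len;
	return 0;
}

/*
 * Called when data for 'tag' arrives, possibly out of order with respect
 * to other outstanding commands: the payload goes to ddp_table[tag].buf at
 * the requested offset, not to whatever anonymous page the packet happened
 * to land in.
 */
static void *ddp_dest_for(uint16_t tag, size_t offset)
{
	if (tag >= MAX_TAGS || offset >= ddp_table[tag].len)
		return NULL;
	return (uint8_t *)ddp_table[tag].buf + offset;
}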
