Message-ID: <65dc5bba-13e6-110a-ddae-3d0c260aa875@gmail.com>
Date:   Tue, 8 Dec 2020 17:38:53 -0700
From:   David Ahern <dsahern@...il.com>
To:     Boris Pismenny <borispismenny@...il.com>,
        Boris Pismenny <borisp@...lanox.com>, kuba@...nel.org,
        davem@...emloft.net, saeedm@...dia.com, hch@....de,
        sagi@...mberg.me, axboe@...com, kbusch@...nel.org,
        viro@...iv.linux.org.uk, edumazet@...gle.com
Cc:     boris.pismenny@...il.com, linux-nvme@...ts.infradead.org,
        netdev@...r.kernel.org, benishay@...dia.com, ogerlitz@...dia.com,
        yorayz@...dia.com, Ben Ben-Ishay <benishay@...lanox.com>,
        Or Gerlitz <ogerlitz@...lanox.com>,
        Yoray Zack <yorayz@...lanox.com>,
        Boris Pismenny <borisp@...dia.com>
Subject: Re: [PATCH v1 net-next 02/15] net: Introduce direct data placement
 tcp offload

On 12/8/20 7:36 AM, Boris Pismenny wrote:
> On 08/12/2020 2:42, David Ahern wrote:
>> On 12/7/20 2:06 PM, Boris Pismenny wrote:
>>> This commit introduces direct data placement offload for TCP.
>>> This capability is accompanied by new net_device operations that
>>> configure hardware contexts. There is a context per socket, and a
>>> context per DDP operation. Additionally, a resynchronization routine
>>> is used to assist the hardware in handling TCP OOO and continuing
>>> the offload. Furthermore, we let the offloading driver advertise the
>>> maximum hw sectors/segments.
>>>
>>> Using this interface, the NIC hardware will scatter TCP payload directly
>>> to the BIO pages according to the command_id.
>>> To maintain the correctness of the network stack, the driver is expected
>>> to construct SKBs that point to the BIO pages.
>>>
>>> Thus, the SKB represents the data on the wire while pointing to data
>>> that is already placed in the destination buffer. As a result, data
>>> from page frags should not be copied out to the linear part.
>>>
>>> As SKBs that use DDP are already very memory efficient, we modify
>>> skb_condense to avoid copying data from fragments to the linear
>>> part of SKBs that belong to a socket that uses DDP offload.
>>>
>>> A follow-up patch will use this interface for DDP in NVMe-TCP.
>>>
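To make the interface shape concrete: the commit message above talks
about per-socket and per-operation hardware contexts, a resync hook for
TCP OOO, and a way for the driver to advertise limits, but does not show
the interface itself. A minimal sketch of what such an ops table could
look like follows; every name in it (tcp_ddp_dev_ops, sk_add, setup, ...)
is illustrative only, not necessarily what this patch actually adds.

#include <linux/netdevice.h>
#include <linux/scatterlist.h>
#include <net/sock.h>

/* Illustrative sketch only -- hypothetical names, not the patch's API. */
struct tcp_ddp_limits {
	int max_ddp_sgl_len;		/* max hw sectors/segments advertised */
};

struct tcp_ddp_io {
	u32 command_id;			/* identifies the DDP operation */
	struct scatterlist *sgl;	/* destination (BIO) pages */
	int nents;
};

struct tcp_ddp_dev_ops {
	/* per-socket context: enable/disable the offload for one flow */
	int  (*sk_add)(struct net_device *dev, struct sock *sk);
	void (*sk_del)(struct net_device *dev, struct sock *sk);

	/* per-DDP-operation context: map a command_id to its buffers */
	int  (*setup)(struct net_device *dev, struct sock *sk,
		      struct tcp_ddp_io *io);
	void (*teardown)(struct net_device *dev, struct sock *sk,
			 struct tcp_ddp_io *io);

	/* resynchronization after TCP OOO so hw can resume the offload */
	void (*resync)(struct net_device *dev, struct sock *sk, u32 seq);

	/* driver advertises max hw sectors/segments */
	int  (*limits)(struct net_device *dev, struct tcp_ddp_limits *lim);
};
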
>>
>> You call this Direct Data Placement - which sounds like a marketing name.
>>
> 
> [Re-sending as the previous one didn't hit the mailing list. Sorry for the spam]
> 
> Interesting idea. But unlike SKBTX_DEV_ZEROCOPY, this SKB can be inspected/modified by the stack without the need to copy things out. Additionally, the SKB may contain both data that is already placed in its final destination buffer (PDU data) and data that isn't (PDU header); it doesn't matter. Therefore, labeling the entire SKB as zerocopy doesn't convey the desired information. Moreover, skipping copies in the stack to receive zerocopy SKBs would require more invasive changes.
> 
> Our goal in this approach was to provide the smallest change that enables the desired functionality while preserving the performance of existing flows that do not care for it. An alternative approach that we considered, one that doesn't affect existing flows at all, was to make a special version of memcpy_to_page to be used by DDP providers (nvme-tcp). This alternative would require creating corresponding special versions for users of this function, such as skb_copy_datagram_iter. That is more invasive, so in this patchset we decided to avoid it.
> 
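For what it's worth, the "smallest change" argument is easy to picture:
skb_condense() already bails out when pulling the frags into the linear
area is not possible or not worth it, so the DDP case only needs one
more early return. A rough sketch follows; the skb_is_ddp_offloaded()
predicate is made up purely for illustration, and the function body is
paraphrased from net/core/skbuff.c rather than copied from this patch.

#include <linux/skbuff.h>

void skb_condense(struct sk_buff *skb)
{
	if (skb->data_len) {
		if (skb->data_len > skb->end - skb->tail ||
		    skb_cloned(skb) ||
		    skb_is_ddp_offloaded(skb))	/* hypothetical check: frags
						 * already sit in their final
						 * destination buffers, so do
						 * not copy them out */
			return;

		/* Nice, we can free page frag(s) right now */
		__pskb_pull_tail(skb, skb->data_len);
	}
	skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
}
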
>> Fundamentally, this starts with offloading TCP socket buffers for a
>> specific flow, so generically a TCP Rx zerocopy for kernel stack managed
>> sockets (as opposed to AF_XDP's zerocopy). Why is this not building in
>> that level of infrastructure first and adding ULPs like NVME on top?
>>
> 
> We aren't using AF_XDP or any of the Rx zerocopy infrastructure, because it is unsuitable for data placement for nvme-tcp, which reorders responses relative to requests for efficiency and requires that data reside in specific destination buffers.
> 
> 

The AF_XDP reference was to differentiate one zerocopy use case (all
packets go to userspace) from another (a kernel-managed TCP socket with
zerocopy payload). You are focusing on a very narrow use case of a more
general problem: kernel-based NVMe over TCP.

You have a TCP socket and a design that only works for kernel-owned
sockets. You have specialized queues in the NIC and a flow rule directing
packets to those queues. Presumably there is some ULP parser in the NIC
associated with the queues to process NVMe packets. Rather than copying
headers (ethernet/ip/tcp) to one buffer and payload to another (which is
similar to what Jonathan Lemon is working on), this design has a ULP
processor that splits out the TCP payload even further, making it
highly selective about which part of the packet is put into which
buffer. Take out the NVMe part, and it is header split with zerocopy for
the payload - a generic feature that can have a wider impact, with NVMe
as a special case.
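
Framing it as header split also matches what the driver has to do when
it builds the SKB: headers land in (or are copied to) the linear area,
and the payload pages - the buffers the device already wrote into - are
only referenced as frags. A rough sketch of that construction from a
driver rx path; the function and all of its parameters are placeholders,
not code from this series:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static struct sk_buff *build_split_skb(struct napi_struct *napi,
				       const void *hdr, unsigned int hdr_len,
				       struct page *payload_page,
				       unsigned int payload_off,
				       unsigned int payload_len)
{
	struct sk_buff *skb;

	skb = napi_alloc_skb(napi, hdr_len);
	if (!skb)
		return NULL;

	/* ethernet/ip/tcp headers go into the linear part */
	skb_put_data(skb, hdr, hdr_len);

	/* The payload page (in the DDP case, the BIO page the device
	 * already scattered the data into) is attached as a frag and
	 * never copied; truesize is approximated by the payload length
	 * for the sake of the example.
	 */
	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, payload_page,
			payload_off, payload_len, payload_len);

	return skb;
}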
