Message-ID: <d0c9c052-6153-6336-f296-15ad5611f21b@gmail.com>
Date:   Wed, 24 Apr 2019 08:59:13 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Maxim Uvarov <maxim.uvarov@...aro.org>, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: RFC: zero copy recv()



On 04/23/2019 11:23 PM, Maxim Uvarov wrote:
> Hello,
> 
> At various conferences I see people trying to accelerate networking by
> moving packet processing, protocol layers included, entirely to user
> space. It might be DPDK, ODP or AF_XDP plus some network stack on top
> of it. They then try to validate the approach with existing
> applications, ideally without modifying the binaries: just LD_PRELOAD
> the socket syscalls (recv(), sendto(), etc.). The current recv()
> expects the application to allocate memory, and the call copies the
> packet into that memory. A copy per packet is slow. Can we consider
> implementing zero-copy-friendly API calls? Could such a change be
> accepted into the kernel?

Generic zero copy is hard.

As soon as you have multiple consumers in different domains for the data,
you need some kind of multiplexing, typically using hardware capabilities.

For TCP, we implemented zero copy receive last year; it works quite well
on x86 if your network uses an MTU of 4096 bytes plus headers.

tools/testing/selftests/net/tcp_mmap.c  reaches line rate (100Gbit) on
a single TCP flow, if using a NIC able to perform header split.

But the model is not to run a legacy application with some LD_PRELOAD
hack/magic, sorry.
