Open Source and information security mailing list archives
 
Message-ID: <CAJ8uoz2HGawFdSuGs_1cZ9uEDTPoMHr5-rXK+JOETD3oGwvoFw@mail.gmail.com>
Date:   Mon, 1 Jul 2019 13:01:29 +0200
From:   Magnus Karlsson <magnus.karlsson@...il.com>
To:     Jonathan Lemon <jonathan.lemon@...il.com>
Cc:     Network Development <netdev@...r.kernel.org>,
        Björn Töpel <bjorn.topel@...el.com>,
        "Karlsson, Magnus" <magnus.karlsson@...el.com>,
        Jakub Kicinski <jakub.kicinski@...ronome.com>,
        jeffrey.t.kirsher@...el.com, kernel-team@...com
Subject: Re: [PATCH 0/3 bpf-next] intel: AF_XDP support for TX of RX packets

On Sat, Jun 29, 2019 at 12:18 AM Jonathan Lemon
<jonathan.lemon@...il.com> wrote:
>
> NOTE: This patch depends on my previous "xsk: reuse cleanup" patch,
> sent to netdev earlier.
>
> The motivation is to have packets which were received on a zero-copy
> AF_XDP socket, and which returned a TX verdict from the bpf program,
> queued directly on the TX ring (if they're in the same napi context).
>
> When these TX packets are completed, they are placed back onto the
> reuse queue, as there isn't really any other place to handle them.
>
> Space in the reuse queue is preallocated at init time for both the
> RX and TX rings.  Another option would be to have a smaller TX queue
> size and count in-flight TX packets, dropping any which exceed the
> reuseq size - this approach is omitted for simplicity.
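[For intuition, the preallocation scheme described above can be sketched in userspace C. This is an illustrative model only, not the actual i40e/ixgbe or xsk code; the `reuseq` structure and function names are made up. The key point it demonstrates: sizing the queue for the sum of both ring sizes means a push at completion time can never overflow, since every buffer the queue might receive came from one of the two rings.]

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of a reuse queue preallocated at init time
 * with room for every buffer on both the RX and TX rings. */
struct reuseq {
	unsigned long *handles;  /* buffer addresses/handles parked here */
	size_t cap;              /* fixed capacity, set once at init */
	size_t len;              /* current number of parked buffers */
};

static int reuseq_init(struct reuseq *q, size_t rx_ring_sz, size_t tx_ring_sz)
{
	/* Preallocate for both rings, so a completion can never
	 * find the queue full. */
	q->cap = rx_ring_sz + tx_ring_sz;
	q->len = 0;
	q->handles = malloc(q->cap * sizeof(*q->handles));
	return q->handles ? 0 : -1;
}

static int reuseq_push(struct reuseq *q, unsigned long handle)
{
	if (q->len == q->cap)
		return -1;  /* unreachable when sized for both rings */
	q->handles[q->len++] = handle;
	return 0;
}

static int reuseq_pop(struct reuseq *q, unsigned long *handle)
{
	if (q->len == 0)
		return -1;  /* nothing parked; refill from fill ring */
	*handle = q->handles[--q->len];
	return 0;
}
```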

This should speed up XDP_TX under ZC substantially, which of course is
a good thing. It would be great if you could add some performance
numbers.

As other people have pointed out, it would have been great if we had a
page pool we could return the buffers to. But we do not, so there are
only two options: keep the buffer in the kernel on the reuse queue, as
in this case, or return it to user space with a length of zero,
indicating that there is no packet data, just a transfer of ownership.
Let us go with the former, as you have done in this patch set, since
we have so far always tried to reuse the buffers inside the kernel.
But the latter option might be good to keep in store as a solution for
other problems.
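[The two completion-handling options could be sketched roughly as follows. This is a userspace illustration under stated assumptions, not the real xsk API: `xdp_desc_sim`, `complete_keep`, and `complete_return` are hypothetical names, and the descriptor layout merely mimics the addr/len pair of an AF_XDP descriptor.]

```c
#include <assert.h>
#include <stddef.h>

/* Simulated AF_XDP-style descriptor: a buffer address plus a length. */
struct xdp_desc_sim {
	unsigned long addr;
	unsigned int len;
};

#define REUSEQ_CAP 8
static unsigned long reuseq[REUSEQ_CAP];
static size_t reuseq_len;

/* Option 1: on TX completion, park the buffer in the kernel on the
 * reuse queue, ready to be posted back to the RX ring later. */
static int complete_keep(unsigned long addr)
{
	if (reuseq_len == REUSEQ_CAP)
		return -1;
	reuseq[reuseq_len++] = addr;
	return 0;
}

/* Option 2: hand the buffer back to user space with len == 0,
 * signalling "no packet data here, only a transfer of ownership". */
static void complete_return(struct xdp_desc_sim *cq_slot, unsigned long addr)
{
	cq_slot->addr = addr;
	cq_slot->len = 0;
}
```

[Option 1 keeps the recycling invisible to the application; option 2 pushes the decision to user space at the cost of a new descriptor convention, which is why it is only kept "in store" here.]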

/Magnus

>
> Jonathan Lemon (3):
>   net: add convert_to_xdp_frame_keep_zc function
>   i40e: Support zero-copy XDP_TX on the RX path for AF_XDP sockets.
>   ixgbe: Support zero-copy XDP_TX on the RX path for AF_XDP sockets.
>
>  drivers/net/ethernet/intel/i40e/i40e_txrx.h  |  1 +
>  drivers/net/ethernet/intel/i40e/i40e_xsk.c   | 54 ++++++++++++--
>  drivers/net/ethernet/intel/ixgbe/ixgbe.h     |  1 +
>  drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 74 +++++++++++++++++---
>  include/net/xdp.h                            | 20 ++++--
>  5 files changed, 134 insertions(+), 16 deletions(-)
>
> --
> 2.17.1
>
