Message-Id: <20210922075613.12186-1-magnus.karlsson@gmail.com>
Date:   Wed, 22 Sep 2021 09:56:00 +0200
From:   Magnus Karlsson <magnus.karlsson@...il.com>
To:     magnus.karlsson@...el.com, bjorn@...nel.org, ast@...nel.org,
        daniel@...earbox.net, netdev@...r.kernel.org,
        maciej.fijalkowski@...el.com, ciara.loftus@...el.com
Cc:     Magnus Karlsson <magnus.karlsson@...il.com>,
        jonathan.lemon@...il.com, bpf@...r.kernel.org,
        anthony.l.nguyen@...el.com
Subject: [PATCH bpf-next 00/13] xsk: i40e: ice: introduce batching for Rx buffer allocation

This patch set introduces a batched interface for Rx buffer allocation
in the AF_XDP buffer pool. Instead of calling xsk_buff_alloc(*pool)
once per buffer, drivers can now call xsk_buff_alloc_batch(*pool,
**xdp_buff_array, max). Rather than returning a pointer to a single
xdp_buff, it returns the number of xdp_buffs it managed to allocate,
up to the maximum given by the max parameter. Pointers to the
allocated xdp_buffs are placed in the xdp_buff_array supplied in the
call. This array can be a SW ring that already exists in the driver or
a new structure that the driver allocates.

u32 xsk_buff_alloc_batch(struct xsk_buff_pool *pool,
                         struct xdp_buff **xdp,
                         u32 max);

When using this interface, the driver should also use the new
interface below to set the relevant fields in struct xdp_buff. Unlike
xsk_buff_alloc(), xsk_buff_alloc_batch() does not fill in the data and
data_meta fields for you, so it is no longer sufficient for the driver
to set only data_end (effectively the size). The reason for this split
is performance, as explained in detail in the commit message.

void xsk_buff_set_size(struct xdp_buff *xdp, u32 size);

Patch 6 also optimizes the buffer allocation in the aligned case.
Here, we can skip the reinitialization of most fields in the
xdp_buff_xsk struct at allocation time. Since the number of elements
in the heads array equals the number of possible buffers in the umem,
we can initialize them once and for all at bind time and then just
point to the correct one in the xdp_buff_array returned to the driver,
so there is no need for a stack of free head entries. In the unaligned
case, the buffers can reside anywhere in the umem, so this
optimization is not possible: the right information still has to be
filled in the xdp_buff every single time one is allocated.

I have updated i40e and ice to use this new batched interface.

These are the throughput results on my 2.1 GHz Cascade Lake system:

Aligned mode:
ice: +11% / -9 cycles/pkt
i40e: +12% / -9 cycles/pkt

Unaligned mode:
ice: +1.5% / -1 cycle/pkt
i40e: +1% / -1 cycle/pkt

For the aligned case, batching provides around 40% of the performance
improvement and the aligned optimization the rest, around 60%. Based
on these numbers, I would have expected a ~4% boost for the unaligned
case as well, but I only get around 1%; I do not know why. Note that
this patch set also reduces memory consumption in aligned mode.

Structure of the patch set:

Patch 1: Removes an unused entry from xdp_buff_xsk.
Patch 2: Introduce the batched buffer allocation API and implementation.
Patch 3-4: Use the batched allocation interface for ice.
Patch 5: Use the batched allocation interface for i40e.
Patch 6: Optimize the buffer allocation for the aligned case.
Patch 7-10: Fix some issues with the tests that were found while
            implementing the two new tests below.
Patch 11-13: Implement two new tests: single packet and headroom validation.

Thanks: Magnus

Magnus Karlsson (13):
  xsk: get rid of unused entry in struct xdp_buff_xsk
  xsk: batched buffer allocation for the pool
  ice: use xdp_buf instead of rx_buf for xsk zero-copy
  ice: use the xsk batched rx allocation interface
  i40e: use the xsk batched rx allocation interface
  xsk: optimize for aligned case
  selftests: xsk: fix missing initialization
  selftests: xsk: put the same buffer only once in the fill ring
  selftests: xsk: fix socket creation retry
  selftests: xsk: introduce pacing of traffic
  selftests: xsk: add single packet test
  selftests: xsk: change interleaving of packets in unaligned mode
  selftests: xsk: add frame_headroom test

 drivers/net/ethernet/intel/i40e/i40e_xsk.c |  52 ++++----
 drivers/net/ethernet/intel/ice/ice_txrx.h  |  16 +--
 drivers/net/ethernet/intel/ice/ice_xsk.c   |  92 +++++++-------
 include/net/xdp_sock_drv.h                 |  22 ++++
 include/net/xsk_buff_pool.h                |  48 +++++++-
 net/xdp/xsk.c                              |  15 ---
 net/xdp/xsk_buff_pool.c                    | 131 +++++++++++++++++---
 net/xdp/xsk_queue.h                        |  12 +-
 tools/testing/selftests/bpf/xdpxceiver.c   | 133 ++++++++++++++++-----
 tools/testing/selftests/bpf/xdpxceiver.h   |  11 +-
 10 files changed, 376 insertions(+), 156 deletions(-)


base-commit: 17b52c226a9a170f1611f69d12a71be05748aefd
--
2.29.0
