Date:   Mon, 11 Jun 2018 01:02:08 +0900
From:   Toshiaki Makita <toshiaki.makita1@...il.com>
To:     netdev@...r.kernel.org
Cc:     Toshiaki Makita <makita.toshiaki@....ntt.co.jp>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>
Subject: [PATCH RFC v2 0/9] veth: Driver XDP

From: Toshiaki Makita <makita.toshiaki@....ntt.co.jp>

This patch set introduces driver XDP for veth.
Basically this is used in conjunction with the redirect action of
another XDP program.

  NIC -----------> veth===veth
 (XDP) (redirect)        (XDP)

In this case the xdp_frame can be forwarded to the peer veth without
modification, so we can expect far better performance than with
generic XDP.
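
For illustration, a minimal pair of programs for the topology above
could look like the following (a sketch only, not code from this
series; the ifindex 42 is a placeholder, and bpf_helpers.h refers to
the header shipped with the kernel selftests):

  #include <linux/bpf.h>
  #include "bpf_helpers.h"

  /* loaded on the physical NIC: redirect every frame to the veth */
  SEC("xdp")
  int xdp_redirect_to_veth(struct xdp_md *ctx)
  {
          return bpf_redirect(42, 0); /* 42: placeholder veth ifindex */
  }

  /* loaded on the peer veth: drop everything, as in the benchmark
   * further below */
  SEC("xdp")
  int xdp_drop_all(struct xdp_md *ctx)
  {
          return XDP_DROP;
  }

  char _license[] SEC("license") = "GPL";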

The envisioned use cases are:

* Container-managed XDP program
The container host redirects frames to containers with the XDP redirect
action, and privileged containers can deploy their own XDP programs
(see the sketch after this list).

* XDP program cascading
Two or more XDP programs can be called for each packet by redirecting
xdp frames to veth.

* Internal interface for an XDP bridge
When using XDP redirection to create a virtual bridge, veth can be used
to create an internal interface for the bridge.
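
As a rough illustration of the container use case (again a sketch, not
part of this series; the map name tx_port and its sizing are
assumptions), the host-side program can spread frames over
per-container veth devices through a devmap:

  #include <linux/bpf.h>
  #include "bpf_helpers.h"

  /* populated from user space with the containers' veth ifindexes */
  struct bpf_map_def SEC("maps") tx_port = {
          .type        = BPF_MAP_TYPE_DEVMAP,
          .key_size    = sizeof(int),
          .value_size  = sizeof(int),
          .max_entries = 64,
  };

  SEC("xdp")
  int xdp_redirect_containers(struct xdp_md *ctx)
  {
          int key = 0;    /* choose the destination container here */

          return bpf_redirect_map(&tx_port, key, 0);
  }

  char _license[] SEC("license") = "GPL";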

With a single core and simple XDP programs which only redirect and
drop packets, I got a 10.5 Mpps redirect/drop rate with an i40e 25G
NIC + veth.

XXV710 (i40e) --- (XDP redirect) --> veth===veth (XDP drop)

This patch set makes use of NAPI to implement ndo_xdp_xmit and
XDP_TX/REDIRECT. This is mainly because XDP heavily relies on NAPI
context.
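
Schematically this follows the standard NAPI pattern (a generic sketch
using 4.17-era APIs, not code from this series; veth_poll and
priv->xdp_napi are placeholder names):

  #include <linux/netdevice.h>

  /* poll callback: consume up to 'budget' frames from the XDP ring */
  static int veth_poll(struct napi_struct *napi, int budget)
  {
          int done = 0;

          /* ... dequeue frames and run the XDP program, counting
           * each processed frame in 'done' ... */

          if (done < budget)
                  napi_complete_done(napi, done);
          return done;
  }

  /* at setup time */
  netif_napi_add(dev, &priv->xdp_napi, veth_poll, NAPI_POLL_WEIGHT);
  napi_enable(&priv->xdp_napi);

  /* a producer such as ndo_xdp_xmit kicks the consumer with */
  napi_schedule(&priv->xdp_napi);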

This patchset is based on top of net-next commit 75d4e704fa8d
(netdev-FAQ: clarify DaveM's position for stable backports).
Any feedback is welcome. Thanks!

v2:
- Squash the NAPI patch into the "Add driver XDP" patch.
- Remove conversion from xdp_frame to skb when NAPI is not enabled.
- Introduce per-queue XDP ring (patch 8).
- Introduce bulk skb xmit when XDP is enabled on the peer (patch 9).

Toshiaki Makita (9):
  net: Export skb_headers_offset_update
  veth: Add driver XDP
  veth: Avoid drops by oversized packets when XDP is enabled
  veth: Add another napi ring for ndo_xdp_xmit and handle xdp_frames
  veth: Add ndo_xdp_xmit
  xdp: Add a flag for disabling napi_direct of xdp_return_frame in
    xdp_mem_info
  veth: Add XDP TX and REDIRECT
  veth: Support per queue XDP ring
  veth: Bulk skb xmit for XDP path

 drivers/net/veth.c     | 734 ++++++++++++++++++++++++++++++++++++++++++++++++-
 include/linux/filter.h |  16 ++
 include/linux/skbuff.h |   1 +
 include/net/xdp.h      |   4 +
 net/core/filter.c      |  11 +-
 net/core/skbuff.c      |   3 +-
 net/core/xdp.c         |   6 +-
 7 files changed, 753 insertions(+), 22 deletions(-)

-- 
2.14.3
