Message-Id: <20180424143923.26519-1-toshiaki.makita1@gmail.com>
Date: Tue, 24 Apr 2018 23:39:14 +0900
From: Toshiaki Makita <toshiaki.makita1@gmail.com>
To: netdev@vger.kernel.org
Cc: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Subject: [PATCH RFC 0/9] veth: Driver XDP
From: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
This patch set introduces driver XDP for veth.
Basically this is used in conjunction with the redirect action of
another XDP program.
NIC -----------> veth===veth
(XDP) (redirect)        (XDP)
In this case xdp_frame can be forwarded to the peer veth without
modification, so we can expect far better performance than generic XDP.
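
For illustration, a minimal NIC-side program of the kind implied above
could look like the sketch below. It is only a sketch; the
VETH_IFINDEX constant is a placeholder assumption, and a real loader
would resolve the ifindex at attach time (e.g. via if_nametoindex()).

/* Sketch: redirect every frame arriving on the NIC into a veth. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define VETH_IFINDEX 10         /* placeholder assumption */

SEC("xdp")
int xdp_redirect_to_veth(struct xdp_md *ctx)
{
        return bpf_redirect(VETH_IFINDEX, 0);
}

char _license[] SEC("license") = "GPL";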
The envisioned use cases are:

* Container-managed XDP programs
  The container host redirects frames to containers via the XDP
  redirect action, and privileged containers can deploy their own XDP
  programs.

* XDP program cascading
  Two or more XDP programs can be called for each packet by
  redirecting xdp frames to veth.

* Internal interface for an XDP bridge
  When using XDP redirection to create a virtual bridge, veth can be
  used to create an internal interface for the bridge.
With a single core and simple XDP programs which only redirect and
drop packets, I got a 10.5 Mpps redirect/drop rate with an i40e 25G
NIC + veth.

XXV710 (i40e) --- (XDP redirect) --> veth===veth (XDP drop)
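
The veth-side drop program in such a test can be as small as the
sketch below (a minimal example, not necessarily the exact program
used for the numbers above):

/* Sketch: drop every frame received on the veth peer. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_drop_all(struct xdp_md *ctx)
{
        return XDP_DROP;
}

char _license[] SEC("license") = "GPL";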
This patch set makes use of NAPI to implement ndo_xdp_xmit and
XDP_TX/REDIRECT. This is mainly because I wanted to avoid stack
inflation caused by recursive calls of XDP programs.
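
To make the shape of this concrete, below is a rough sketch of the
scheme (illustrative only; names like veth_rq, xdp_ring and
veth_xdp_rcv_one() are assumptions, not the patch code). The point is
that ndo_xdp_xmit() merely enqueues frames and schedules NAPI, and the
XDP program runs later from the poll handler, so the xmit path never
recurses into another XDP program.

/* Sketch of the NAPI-based scheme; not the actual patch code. */
static int veth_xdp_xmit(struct net_device *dev, struct xdp_frame *frame)
{
        struct veth_rq *rq = veth_peer_rq(dev); /* hypothetical helper */

        /* Enqueue only; the XDP program is NOT run here. */
        if (ptr_ring_produce(&rq->xdp_ring, frame))
                return -ENOSPC;         /* ring full; caller frees frame */

        napi_schedule(&rq->xdp_napi);
        return 0;
}

static int veth_poll(struct napi_struct *napi, int budget)
{
        struct veth_rq *rq = container_of(napi, struct veth_rq, xdp_napi);
        int done = 0;

        while (done < budget) {
                struct xdp_frame *frame = ptr_ring_consume(&rq->xdp_ring);

                if (!frame)
                        break;
                /* The peer's XDP program runs here, in NAPI context. */
                veth_xdp_rcv_one(rq, frame);
                done++;
        }

        if (done < budget)
                napi_complete_done(napi, done);

        return done;
}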
As an RFC, this does not yet implement the recently introduced
xdp_adjust_tail, and it is based on top of Jesper's redirect memory
return API patch set (684009d4fdaf).
Any feedback is welcome. Thanks!
Toshiaki Makita (9):
net: Export skb_headers_offset_update and skb_copy_header
veth: Add driver XDP
veth: Avoid drops by oversized packets when XDP is enabled
veth: Use NAPI for XDP
veth: Handle xdp_frame in xdp napi ring
veth: Add ndo_xdp_xmit
veth: Add XDP TX and REDIRECT
veth: Avoid per-packet spinlock of XDP napi ring on dequeueing
veth: Avoid per-packet spinlock of XDP napi ring on enqueueing
drivers/net/veth.c | 688 +++++++++++++++++++++++++++++++++++++++++++++++--
include/linux/filter.h | 16 ++
include/linux/skbuff.h | 2 +
net/core/filter.c | 11 +-
net/core/skbuff.c | 12 +-
5 files changed, 699 insertions(+), 30 deletions(-)
--
2.14.3