Message-ID: <CAJ+HfNj8M-GEceY0CJw-gMs6mjNaQsw-3nUggwtUTaxGtsuw6Q@mail.gmail.com>
Date: Tue, 18 Dec 2018 15:14:42 +0100
From: Björn Töpel <bjorn.topel@...il.com>
To: William Tu <u9012063@...il.com>
Cc: Magnus Karlsson <magnus.karlsson@...il.com>, ast@...nel.org,
Daniel Borkmann <daniel@...earbox.net>,
Netdev <netdev@...r.kernel.org>, makita.toshiaki@....ntt.co.jp,
"Karlsson, Magnus" <magnus.karlsson@...el.com>
Subject: Re: [bpf-next RFC 0/3] AF_XDP support for veth.

On Mon, 17 Dec 2018 at 20:40, William Tu <u9012063@...il.com> wrote:
>
> The patch series adds AF_XDP async xmit support for the veth device.
> The first patch adds a new API that lets non-physical (virtual) NIC
> devices get a packet's virtual address. The second patch implements
> the async xmit, and the last patch adds example use cases.
>
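
For readers who haven't looked at the patches yet: the helper added in
patch 1 presumably mirrors the existing xsk_umem_consume_tx() in
net/xdp/xsk.c, but hands the caller a kernel virtual address (via
xdp_umem_get_data()) instead of a DMA address, so a software device can
simply memcpy the frame. A rough sketch of that idea -- the exact
signature is an assumption, not the posted code:

/* Sketch only -- assumes the helper mirrors xsk_umem_consume_tx(),
 * returning the umem kernel virtual address instead of a DMA address.
 */
bool xsk_umem_consume_tx_virtual(struct xdp_umem *umem, void **vaddr,
				 u32 *len)
{
	struct xdp_desc desc;
	struct xdp_sock *xs;

	rcu_read_lock();
	list_for_each_entry_rcu(xs, &umem->xsk_list, list) {
		/* Peek the next Tx descriptor queued by user space. */
		if (!xskq_peek_desc(xs->tx, &desc))
			continue;

		/* Reserve a completion-ring entry for this descriptor. */
		if (xskq_produce_addr_lazy(umem->cq, desc.addr))
			goto out;

		/* Virtual address of the frame inside the umem. */
		*vaddr = xdp_umem_get_data(umem, desc.addr);
		*len = desc.len;

		xskq_discard_desc(xs->tx);
		rcu_read_unlock();
		return true;
	}

out:
	rcu_read_unlock();
	return false;
}
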
The first virtual device with AF_XDP support! Yay!

This is zero-copy only on the Tx side -- it's still allocation plus
copy on the ingress side? That's a bit different from the i40e/ixgbe
implementations, where zero-copy means both Tx and Rx. For veth I don't
see that we need to support Rx right away, especially for Tx-only
sockets. Still, once the netdev has accepted the umem via ndo_bpf,
zero-copy for both Tx and Rx is assumed. We might want to change
ndo_bpf at some point to support zero-copy for Tx only, Rx only, or Tx
*and* Rx.
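
To make the ndo_bpf point concrete: today the XDP_SETUP_XSK_UMEM
command only carries the umem and the queue id, so a driver that
accepts it is expected to do zero-copy in both directions. One purely
hypothetical way to express per-direction support -- the flag names
below are made up for illustration and are not an existing kernel API
-- would be something like:

/* Hypothetical flags, for illustration only. */
#define XSK_UMEM_ZC_TX	BIT(0)
#define XSK_UMEM_ZC_RX	BIT(1)

struct netdev_bpf {
	enum bpf_netdev_command command;
	union {
		/* ... existing commands elided ... */

		/* XDP_SETUP_XSK_UMEM */
		struct {
			struct xdp_umem *umem;
			u16 queue_id;
			u16 zc_flags;	/* hypothetical: which directions
					 * the driver treats as zero-copy */
		} xsk;
	};
};
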
Are you planning to add zero-copy to the ingress side, i.e. pulling
frames from the fill ring, instead of allocating via dev_alloc_page?
(The term *zero-copy* for veth is a bit weird, since we're still doing
copies, but eliding the page allocation. :-))
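
To make the ingress question concrete: instead of dev_alloc_page() plus
copy, the veth Rx path could consume an address from the umem fill ring
and copy the frame straight into the user-provided buffer, roughly
along these lines (a sketch built on the existing fill-ring helpers in
include/net/xdp_sock.h; the function name and the veth-side details are
assumptions):

/* Sketch only: consume a fill-ring address instead of dev_alloc_page().
 * Still one copy, but no page allocation. Error handling, Rx-ring
 * production and the veth plumbing around it are left out.
 */
static int veth_xsk_rcv_copy(struct xdp_umem *umem, void *frame, u32 len)
{
	u64 addr;
	void *dst;

	/* Next buffer that user space posted on the fill ring. */
	if (!xsk_umem_peek_addr(umem, &addr))
		return -ENOSPC;		/* fill ring empty */

	/* Copy the frame into the user-owned umem buffer. */
	dst = xdp_umem_get_data(umem, addr);
	memcpy(dst, frame, len);

	/* Consume the fill-ring entry; an Rx descriptor for
	 * (addr, len) would then be produced towards user space. */
	xsk_umem_discard_addr(umem);

	return 0;
}
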
It would be interesting to hear a bit about what use cases veth/AF_XDP
has, if you can share that.

Cheers,
Björn
> I tested with 2 namespaces, one as sender, the other as receiver.
> The packet rate is measured at the receiver side.
> ip netns add at_ns0
> ip link add p0 type veth peer name p1
> ip link set p0 netns at_ns0
> ip link set dev p1 up
> ip netns exec at_ns0 ip link set dev p0 up
>
> # receiver
> ip netns exec at_ns0 xdp_rxq_info --dev p0 --action XDP_DROP
>
> # sender with AF_XDP
> xdpsock -i p1 -t -N -z
>
> # or sender without AF_XDP
> xdpsock -i p1 -t -S
>
> Without AF_XDP: 724 Kpps
> RXQ stats RXQ:CPU pps issue-pps
> rx_queue_index 0:1 724339 0
> rx_queue_index 0:sum 724339
>
> With AF_XDP: 1.1 Mpps (with ksoftirqd 100% cpu)
> RXQ stats RXQ:CPU pps issue-pps
> rx_queue_index 0:3 1188181 0
> rx_queue_index 0:sum 1188181
>
> William Tu (3):
> xsk: add xsk_umem_consume_tx_virtual.
> veth: support AF_XDP.
> samples: bpf: add veth AF_XDP example.
>
> drivers/net/veth.c | 247 ++++++++++++++++++++++++++++++++++++++++-
> include/net/xdp_sock.h | 7 ++
> net/xdp/xsk.c | 24 ++++
> samples/bpf/test_veth_afxdp.sh | 67 +++++++++++
> 4 files changed, 343 insertions(+), 2 deletions(-)
> create mode 100755 samples/bpf/test_veth_afxdp.sh
>
> --
> 2.7.4
>