Message-ID: <20190613141351.77747fc1@carbon>
Date: Thu, 13 Jun 2019 14:13:51 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Toshiaki Makita <toshiaki.makita1@...il.com>
Cc: brouer@...hat.com, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <jakub.kicinski@...ronome.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
netdev@...r.kernel.org, xdp-newbies@...r.kernel.org,
bpf@...r.kernel.org,
Toke Høiland-Jørgensen <toke@...hat.com>,
Jason Wang <jasowang@...hat.com>
Subject: Re: [PATCH v3 bpf-next 2/2] veth: Support bulk XDP_TX
On Thu, 13 Jun 2019 18:39:59 +0900
Toshiaki Makita <toshiaki.makita1@...il.com> wrote:
> XDP_TX is similar to XDP_REDIRECT as it essentially redirects packets to
> the device itself. XDP_REDIRECT has a bulk transmit mechanism to avoid the
> heavy cost of indirect calls, and it also reduces lock acquisitions on
> destination devices that need locks, like veth and tun.
>
> XDP_TX does not use indirect calls, but drivers that require locks can
> benefit from bulk transmit for XDP_TX as well.
>
> This patch introduces a bulk transmit mechanism in veth using a bulk
> queue on the stack, improving XDP_TX performance by about 9%.
>
> Here are single-core/single-flow XDP_TX test results. CPU consumption
> figures are taken from "perf report --no-child".
>
> - Before:
>
> 7.26 Mpps
>
> _raw_spin_lock 7.83%
> veth_xdp_xmit 12.23%
>
> - After:
>
> 7.94 Mpps
>
> _raw_spin_lock 1.08%
> veth_xdp_xmit 6.10%
>
> v2:
> - Use stack for bulk queue instead of a global variable.
>
> Signed-off-by: Toshiaki Makita <toshiaki.makita1@...il.com>
> ---
> drivers/net/veth.c | 60 +++++++++++++++++++++++++++++++++++++++++++-----------
> 1 file changed, 48 insertions(+), 12 deletions(-)
Acked-by: Jesper Dangaard Brouer <brouer@...hat.com>
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer