Message-ID: <CAPhsuW4QTOgC+fDYRZnVwWtt3NTS9D+56mpP04Kh3tHrkD7G1A@mail.gmail.com>
Date: Thu, 1 Apr 2021 09:40:54 -0700
From: Song Liu <song@...nel.org>
To: Lorenzo Bianconi <lorenzo@...nel.org>
Cc: bpf <bpf@...r.kernel.org>, Networking <netdev@...r.kernel.org>,
lorenzo.bianconi@...hat.com,
"David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <brouer@...hat.com>
Subject: Re: [PATCH bpf-next] cpumap: bulk skb using netif_receive_skb_list
On Thu, Apr 1, 2021 at 1:57 AM Lorenzo Bianconi <lorenzo@...nel.org> wrote:
>
> Rely on the netif_receive_skb_list routine to feed the network stack
> with skbs converted from xdp_frames in cpu_map_kthread_run, in order
> to improve i-cache usage.
> The proposed patch has been tested running the xdp_redirect_cpu bpf
> sample available in the kernel tree, which redirects UDP frames from
> the ixgbe driver to a cpumap entry and then to the networking stack.
> UDP frames are generated using pktgen.
>
> $ xdp_redirect_cpu --cpu <cpu> --progname xdp_cpu_map0 --dev <eth>
>
> bpf-next: ~2.2Mpps
> bpf-next + cpumap skb-list: ~3.15Mpps
>
> Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> ---
> kernel/bpf/cpumap.c | 9 ++++-----
> 1 file changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
> index 0cf2791d5099..b33114ce2e2b 100644
> --- a/kernel/bpf/cpumap.c
> +++ b/kernel/bpf/cpumap.c
> @@ -257,6 +257,7 @@ static int cpu_map_kthread_run(void *data)
> void *frames[CPUMAP_BATCH];
> void *skbs[CPUMAP_BATCH];
> int i, n, m, nframes;
> + LIST_HEAD(list);
>
> /* Release CPU reschedule checks */
> if (__ptr_ring_empty(rcpu->queue)) {
> @@ -305,7 +306,6 @@ static int cpu_map_kthread_run(void *data)
> for (i = 0; i < nframes; i++) {
> struct xdp_frame *xdpf = frames[i];
> struct sk_buff *skb = skbs[i];
> - int ret;
>
> skb = __xdp_build_skb_from_frame(xdpf, skb,
> xdpf->dev_rx);
> @@ -314,11 +314,10 @@ static int cpu_map_kthread_run(void *data)
> continue;
> }
>
> - /* Inject into network stack */
> - ret = netif_receive_skb_core(skb);
> - if (ret == NET_RX_DROP)
> - drops++;
I guess we stop tracking "drops" with this patch?
Thanks,
Song
> + list_add_tail(&skb->list, &list);
> }
> + netif_receive_skb_list(&list);
> +
> /* Feedback loop via tracepoint */
> trace_xdp_cpumap_kthread(rcpu->map_id, n, drops, sched, &stats);
>
> --
> 2.30.2
>