Message-ID: <20200623165612.2596954b@carbon>
Date: Tue, 23 Jun 2020 16:56:12 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Lorenzo Bianconi <lorenzo@...nel.org>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org, davem@...emloft.net,
ast@...nel.org, daniel@...earbox.net, toke@...hat.com,
lorenzo.bianconi@...hat.com, dsahern@...nel.org, brouer@...hat.com
Subject: Re: [PATCH v2 bpf-next 4/8] bpf: cpumap: add the possibility to
attach an eBPF program to cpumap
On Sat, 20 Jun 2020 00:57:20 +0200
Lorenzo Bianconi <lorenzo@...nel.org> wrote:
> @@ -273,16 +336,20 @@ static int cpu_map_kthread_run(void *data)
> prefetchw(page);
> }
>
> - m = kmem_cache_alloc_bulk(skbuff_head_cache, gfp, n, skbs);
> + /* Support running another XDP prog on this CPU */
> + nframes = cpu_map_bpf_prog_run_xdp(rcpu, xdp_frames, n, &stats);
> +
If all frames are dropped by my XDP program, then we will call
kmem_cache_alloc_bulk() to allocate zero elements. I found this during
my testing[1], and I think we should squash my proposed change from [1].
> + m = kmem_cache_alloc_bulk(skbuff_head_cache, gfp,
> + nframes, skbs);
> if (unlikely(m == 0)) {
> - for (i = 0; i < n; i++)
> + for (i = 0; i < nframes; i++)
> skbs[i] = NULL; /* effect: xdp_return_frame */
> - drops = n;
> + drops += nframes;
> }
>
> local_bh_disable();
> - for (i = 0; i < n; i++) {
> - struct xdp_frame *xdpf = frames[i];
> + for (i = 0; i < nframes; i++) {
> + struct xdp_frame *xdpf = xdp_frames[i];
> struct sk_buff *skb = skbs[i];
> int ret;
[1] https://github.com/xdp-project/xdp-project/blob/master/areas/cpumap/cpumap04-map-xdp-prog.org#observations
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer