Message-ID: <20210422185332.3199ca2e@carbon>
Date: Thu, 22 Apr 2021 18:53:32 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Hangbin Liu <liuhangbin@...il.com>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org,
 Toke Høiland-Jørgensen <toke@...hat.com>,
 Jiri Benc <jbenc@...hat.com>,
 Eelco Chaudron <echaudro@...hat.com>,
 ast@...nel.org,
 Daniel Borkmann <daniel@...earbox.net>,
 Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
 David Ahern <dsahern@...il.com>,
 Andrii Nakryiko <andrii.nakryiko@...il.com>,
 Alexei Starovoitov <alexei.starovoitov@...il.com>,
 John Fastabend <john.fastabend@...il.com>,
 Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
 Björn Töpel <bjorn.topel@...il.com>,
 Martin KaFai Lau <kafai@...com>,
 brouer@...hat.com
Subject: Re: [PATCHv9 bpf-next 2/4] xdp: extend xdp_redirect_map with
broadcast support
On Thu, 22 Apr 2021 15:14:52 +0800
Hangbin Liu <liuhangbin@...il.com> wrote:
> diff --git a/net/core/filter.c b/net/core/filter.c
> index cae56d08a670..afec192c3b21 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
[...]
> int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
> struct bpf_prog *xdp_prog)
> {
> @@ -3933,6 +3950,7 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
> enum bpf_map_type map_type = ri->map_type;
> void *fwd = ri->tgt_value;
> u32 map_id = ri->map_id;
> + struct bpf_map *map;
> int err;
>
> ri->map_id = 0; /* Valid map id idr range: [1,INT_MAX[ */
> @@ -3942,7 +3960,12 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
> case BPF_MAP_TYPE_DEVMAP:
> fallthrough;
> case BPF_MAP_TYPE_DEVMAP_HASH:
> - err = dev_map_enqueue(fwd, xdp, dev);
> + map = xchg(&ri->map, NULL);
Hmm, having this on the fast-path looks dangerous for performance.
The xchg call can be expensive; AFAIK it is an atomic operation.
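Since bpf_redirect_info is a per-CPU structure and (AFAICS) only touched
from the local CPU between the helper setting ri->map and this point, a
plain read plus clear might be enough here. Untested sketch, just to
illustrate the idea (whether that locality assumption really holds needs
checking):

	/* Untested sketch: avoid the atomic xchg on the fast-path by
	 * using a plain per-CPU read + clear. Assumes ri->map is only
	 * accessed from the local CPU in this window.
	 */
	struct bpf_map *map = READ_ONCE(ri->map);

	if (unlikely(map)) {
		WRITE_ONCE(ri->map, NULL);
		err = dev_map_enqueue_multi(xdp, dev, map,
					    ri->flags & BPF_F_EXCLUDE_INGRESS);
	} else {
		err = dev_map_enqueue(fwd, xdp, dev);
	}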
> + if (map)
> + err = dev_map_enqueue_multi(xdp, dev, map,
> + ri->flags & BPF_F_EXCLUDE_INGRESS);
> + else
> + err = dev_map_enqueue(fwd, xdp, dev);
> break;
> case BPF_MAP_TYPE_CPUMAP:
> err = cpu_map_enqueue(fwd, xdp, dev);
> @@ -3984,13 +4007,19 @@ static int xdp_do_generic_redirect_map(struct net_device *dev,
> enum bpf_map_type map_type, u32 map_id)
> {
> struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
> + struct bpf_map *map;
> int err;
>
> switch (map_type) {
> case BPF_MAP_TYPE_DEVMAP:
> fallthrough;
> case BPF_MAP_TYPE_DEVMAP_HASH:
> - err = dev_map_generic_redirect(fwd, skb, xdp_prog);
> + map = xchg(&ri->map, NULL);
Same here!
> + if (map)
> + err = dev_map_redirect_multi(dev, skb, xdp_prog, map,
> + ri->flags & BPF_F_EXCLUDE_INGRESS);
> + else
> + err = dev_map_generic_redirect(fwd, skb, xdp_prog);
> if (unlikely(err))
> goto err;
> break;
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer