Message-ID: <20210120041628.GH1421720@Leo-laptop-t470s>
Date: Wed, 20 Jan 2021 12:16:28 +0800
From: Hangbin Liu <liuhangbin@...il.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org,
Daniel Borkmann <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>,
Yonghong Song <yhs@...com>,
Toke Høiland-Jørgensen <toke@...hat.com>
Subject: Re: [PATCHv8 bpf-next] samples/bpf: add xdp program on egress for
xdp_redirect_map
On Tue, Jan 19, 2021 at 03:51:27PM +0100, Jesper Dangaard Brouer wrote:
> > @@ -73,13 +90,63 @@ int xdp_redirect_map_prog(struct xdp_md *ctx)
> >
> > /* count packet in global counter */
> > value = bpf_map_lookup_elem(&rxcnt, &key);
> > - if (value)
> > + if (value) {
> > *value += 1;
> > + if (*value % 2 == 1)
> > + vport = 1;
>
> This will also change the base behavior of the program, e.g when we are
> not testing the 2nd xdp-prog. It will become hard to compare the
> performance between xdp_redirect and xdp_redirect_map.
I just did a test with and without this patch on 5.10, using pktgen as the
traffic generator and running ./xdp_redirect_map -N/-S eno1 eno1
(-N native mode, -S generic/SKB mode):

Without this patch:
- -S: 1.8M pps
- -N: 7.4M pps

With this patch:
- -S: 1.9M pps
- -N: 7.4M pps

So I don't see much difference.
>
> It looks like you are populating vport=0 and vport=1 with the same ifindex.
> Thus, this code is basically doing packet reordering, due to the per
> CPU bulking layer (of 16 packets) in devmap.
> Is this the intended behavior?
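Right, both slots point at the same egress ifindex. On the loader side it
is roughly like this (just a sketch, not the exact patch code; the fd and
variable names are illustrative and the egress prog fd handling in the
devmap value is left out):

        __u32 key;
        int ifindex_out = if_nametoindex("eno1");

        /* devmap slots 0 and 1 both point at the same egress device */
        key = 0;
        bpf_map_update_elem(tx_port_map_fd, &key, &ifindex_out, 0);
        key = 1;
        bpf_map_update_elem(tx_port_map_fd, &key, &ifindex_out, 0);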
I didn't expect this could cause reordering. If we only want to do this
when the 2nd prog is attached, we need to add a check like:
        key = 1;
        value = bpf_map_lookup_elem(&tx_port_native, &key);
        if (value)
                do_2nd_prog_test
        else
                do_nothing_and_redirect_with_port_0
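In xdp_redirect_map_prog() that would look roughly like this (untested
sketch on top of the current patch, reusing the sample's variable and map
names):

        /* count packet in global counter */
        value = bpf_map_lookup_elem(&rxcnt, &key);
        if (value)
                *value += 1;

        /* only alternate the egress port when the 2nd prog test is set
         * up, i.e. when slot 1 of the devmap has been populated */
        key = 1;
        if (bpf_map_lookup_elem(&tx_port_native, &key)) {
                if (value && *value % 2 == 1)
                        vport = 1;
        }
        /* else keep redirecting everything via port 0 as before */

        return bpf_redirect_map(&tx_port_native, vport, 0);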
But an extra bpf_map_lookup_elem() for every packet may cause a further
performance drop. So WDYT?
Thanks
Hangbin