Message-ID: <20200603024054.GK102436@dhcp-12-153.nay.redhat.com>
Date: Wed, 3 Jun 2020 10:40:54 +0800
From: Hangbin Liu <liuhangbin@...il.com>
To: Toke Høiland-Jørgensen <toke@...hat.com>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org,
Jiri Benc <jbenc@...hat.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Eelco Chaudron <echaudro@...hat.com>, ast@...nel.org,
Daniel Borkmann <daniel@...earbox.net>,
Lorenzo Bianconi <lorenzo.bianconi@...hat.com>
Subject: Re: [PATCHv4 bpf-next 0/2] xdp: add dev map multicast support
On Wed, May 27, 2020 at 12:21:54PM +0200, Toke Høiland-Jørgensen wrote:
> > The example in patch 2 is functional, but not a lot of effort
> > has been made on performance optimisation. I did a simple test (pkt size 64)
> > with pktgen. Here are the test results with BPF_MAP_TYPE_DEVMAP_HASH
> > arrays:
> >
> > bpf_redirect_map() with 1 ingress, 1 egress:
> > generic path: ~1600k pps
> > native path: ~980k pps
> >
> > bpf_redirect_map_multi() with 1 ingress, 3 egress:
> > generic path: ~600k pps
> > native path: ~480k pps
> >
> > bpf_redirect_map_multi() with 1 ingress, 9 egress:
> > generic path: ~125k pps
> > native path: ~100k pps
> >
> > bpf_redirect_map_multi() is slower than bpf_redirect_map() as we loop over
> > the arrays and clone the skb/xdpf. The native path is slower than the
> > generic path because we send skbs with pktgen. So the result looks reasonable.
>
> How are you running these tests? Still on virtual devices? We really
> need results from a physical setup in native mode to assess the impact
> on the native-XDP fast path. The numbers above don't tell much in this
> regard. I'd also like to see a before/after-patch comparison for straight
> bpf_redirect_map(), since you're messing with the fast path, and we want
> to make sure it's not causing a performance regression for regular
> redirect.
>
> Finally, since the overhead seems to be quite substantial: A comparison
> with a regular network stack bridge might make sense? After all we also
> want to make sure it's a performance win over that :)
Hi Toke,

Here is the result I tested with 2 i40e 10G ports on a physical machine.
The pktgen pkt_size is 64.
Bridge forwarding (I use samples/bpf/xdp1 to count the PPS, so there are data for both modes):
  generic mode: 1.32M PPS
  driver mode: 1.66M PPS

xdp_redirect_map:
  generic mode: 1.88M PPS
  driver mode: 2.74M PPS

xdp_redirect_map_multi:
  generic mode: 1.38M PPS
  driver mode: 2.73M PPS
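
For reference, an XDP program using the proposed helper would look roughly
like the sketch below. This is only an illustration: it assumes the
bpf_redirect_map_multi(map, ex_map, flags) signature and the
BPF_F_EXCLUDE_INGRESS flag added by this series, and the map name and size
are placeholders rather than the actual sample code.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Devmap hash holding the egress interfaces; the size is a placeholder. */
struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP_HASH);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
	__uint(max_entries, 32);
} forward_map SEC(".maps");

SEC("xdp")
int xdp_redirect_multi(struct xdp_md *ctx)
{
	/* Clone and redirect the frame to every interface in forward_map,
	 * excluding the ingress interface (NULL exclude map plus
	 * BPF_F_EXCLUDE_INGRESS, as proposed in this series).
	 */
	return bpf_redirect_map_multi(&forward_map, NULL,
				      BPF_F_EXCLUDE_INGRESS);
}

char _license[] SEC("license") = "GPL";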
So what do you think about the data? If it looks OK to you, I will update
my patch and re-post it.

Thanks
Hangbin