Message-ID: <20200604121212.GM102436@dhcp-12-153.nay.redhat.com>
Date: Thu, 4 Jun 2020 20:12:12 +0800
From: Hangbin Liu <liuhangbin@...il.com>
To: Toke Høiland-Jørgensen <toke@...hat.com>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org,
Jiri Benc <jbenc@...hat.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Eelco Chaudron <echaudro@...hat.com>, ast@...nel.org,
Daniel Borkmann <daniel@...earbox.net>,
Lorenzo Bianconi <lorenzo.bianconi@...hat.com>
Subject: Re: [PATCHv4 bpf-next 0/2] xdp: add dev map multicast support
On Thu, Jun 04, 2020 at 11:44:24AM +0200, Toke Høiland-Jørgensen wrote:
> Hangbin Liu <liuhangbin@...il.com> writes:
> > Here is the test topology, which looks like
> >
> > Host A | Host B | Host C
> > eth0 + eth0 - eth1 + eth0
> >
> > I did pktgen sending on Host A, forwarding on Host B.
> > Host B is a Dell PowerEdge R730 (128G memory, Intel(R) Xeon(R) CPU E5-2690 v3)
> > eth0, eth1 is an onboard i40e 10G driver
> >
> > Test 1: add eth0, eth1 to br0 and test bridge forwarding
> > Test 2: Test xdp_redirect_map(), eth0 is ingress, eth1 is egress
> > Test 3: Test xdp_redirect_map_multi(), eth0 is ingress, eth1 is egress
>
> Right, that all seems reasonable, but that machine is comparable to
> my test machine, so you should be getting way more than 2.75 MPPS on a
> regular redirect test. Are you bottlenecked on pktgen or something?
Yes, I found that pktgen was the bottleneck; I was only using 1 thread.
With the command you gave me:
./pktgen_sample03_burst_single_flow.sh -i eno1 -d 192.168.200.1 -m f8:bc:12:14:11:20 -t 4 -s 64
I can now get a much higher rate.
>
> Could you please try running Jesper's ethtool stats poller:
> https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
Nice tool.
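For anyone who wants to reproduce, the invocation is roughly like this
(assuming one --dev option per monitored interface; other options omitted):

  ./ethtool_stats.pl --dev eth0 --dev eth1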
> > I thought you wanted me to also test with bridge forwarding. Am I missing something?
>
> Yes, but what does this mean:
> > (I use sample/bpf/xdp1 to count the PPS, so there are two modes data):
>
> or rather, why are there two numbers? :)
Just as it says: that is the bridge forwarding speed. I used the XDP sample
samples/bpf/xdp1 to count the PPS, but there are two modes when attaching XDP
to eth0, generic and driver mode, so there are two numbers.
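For reference, the two attach modes can be selected like this with iproute2
(just an illustration; xdp1 actually loads the program through its own
libbpf-based loader, and xdp_prog.o here is a placeholder object file):

  # generic (SKB) mode: the program runs after skb allocation, so it is slower
  ip link set dev eth0 xdpgeneric obj xdp_prog.o sec xdp

  # driver (native) mode: the program runs in the i40e driver before skb allocation
  ip link set dev eth0 xdpdrv obj xdp_prog.o sec xdp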
Now I use ethtool_stats.pl to measure the forwarding speed instead, and here are the results:
Kernel 5.7 (ingress i40e, egress i40e):
  bridge: 1.8M PPS
  xdp_redirect_map:
    generic mode: 1.9M PPS
    driver mode:  10.4M PPS

Kernel 5.7 + my patch (ingress i40e, egress i40e):
  bridge: 1.8M PPS
  xdp_redirect_map:
    generic mode: 1.86M PPS
    driver mode:  10.17M PPS
  xdp_redirect_map_multi:
    generic mode: 1.53M PPS
    driver mode:  7.22M PPS

Kernel 5.7 + my patch (ingress i40e, egress veth):
  xdp_redirect_map:
    generic mode: 1.38M PPS
    driver mode:  4.15M PPS
  xdp_redirect_map_multi:
    generic mode: 1.13M PPS
    driver mode:  3.55M PPS

Kernel 5.7 + my patch (ingress i40e, egress i40e + veth):
  xdp_redirect_map_multi:
    generic mode: 1.13M PPS
    driver mode:  3.47M PPS
I added a group with i40e ingress and veth egress, which shows a significant
drop in speed. It looks like the veth driver is the bottleneck, but I don't
have more i40e NICs on this test bed...
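For completeness, an egress veth device for this kind of test can be set up
roughly like this (a sketch only; the exact interface names and namespace
layout in my setup may differ):

  # create a veth pair and move one end into a namespace as the receiver
  ip netns add ns1
  ip link add veth0 type veth peer name veth1
  ip link set veth1 netns ns1
  ip link set veth0 up
  ip netns exec ns1 ip link set veth1 up
  # veth0 is then used as the egress interface in the devmap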
Thanks
Hangbin