Message-ID: <87bllzj9bw.fsf@toke.dk>
Date: Thu, 04 Jun 2020 14:37:23 +0200
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: Hangbin Liu <liuhangbin@...il.com>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org,
Jiri Benc <jbenc@...hat.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Eelco Chaudron <echaudro@...hat.com>, ast@...nel.org,
Daniel Borkmann <daniel@...earbox.net>,
Lorenzo Bianconi <lorenzo.bianconi@...hat.com>
Subject: Re: [PATCHv4 bpf-next 0/2] xdp: add dev map multicast support

Hangbin Liu <liuhangbin@...il.com> writes:
> On Thu, Jun 04, 2020 at 11:44:24AM +0200, Toke Høiland-Jørgensen wrote:
>> Hangbin Liu <liuhangbin@...il.com> writes:
>> > Here is the test topology, which looks like
>> >
>> > Host A | Host B | Host C
>> > eth0 + eth0 - eth1 + eth0
>> >
>> > I did pktgen sending on Host A, forwarding on Host B.
>> > Host B is a Dell PowerEdge R730 (128G memory, Intel(R) Xeon(R) CPU E5-2690 v3)
>> > eth0, eth1 is an onboard i40e 10G driver
>> >
>> > Test 1: add eth0, eth1 to br0 and test bridge forwarding
>> > Test 2: Test xdp_redirect_map(), eth0 is ingress, eth1 is egress
>> > Test 3: Test xdp_redirect_map_multi(), eth0 is ingress, eth1 is egress
>>
>> Right, that all seems reasonable, but that machine is comparable to
>> my test machine, so you should be getting way more than 2.75 MPPS on a
>> regular redirect test. Are you bottlenecked on pktgen or something?
>
> Yes, I found that pktgen was the bottleneck; I was only using 1 thread.
> With the command you gave me:
> ./pktgen_sample03_burst_single_flow.sh -i eno1 -d 192.168.200.1 -m f8:bc:12:14:11:20 -t 4 -s 64
>
> I now get a much higher rate.
>
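Good. If you want to sanity-check the generator side in future runs, the
per-thread pktgen counters under /proc/net/pktgen/ show what each kernel
thread actually transmitted, e.g. something like (the exact device naming
is how I remember the sample scripts setting things up, so treat the paths
as an assumption):

  grep -H . /proc/net/pktgen/eno1@*
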
>>
>> Could you please try running Jesper's ethtool stats poller:
>> https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
>
> Nice tool.
>
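For reference, I usually run it with something like:

  ethtool_stats.pl --dev eth0 --dev eth1 --sec 2

to get per-second packet counters for both NICs at once (option names from
memory, so double-check against the script's usage output).
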
>> > I thought you also wanted me to test bridge forwarding. Am I missing something?
>>
>> Yes, but what does this mean:
>> > (I use sample/bpf/xdp1 to count the PPS, so there are two modes data):
>>
>> or rather, why are there two numbers? :)
>
> Just what it says: to test the bridge forwarding speed. I use the XDP tool
> samples/bpf/xdp1 to count the PPS. But there are two modes when attaching
> XDP to eth0, generic and driver mode, so there are two numbers.
>
> Now I use ethtool_stats.pl to measure the forwarding speed, and here are the results:
>
> With kernel 5.7 (ingress i40e, egress i40e)
> XDP:
> bridge: 1.8M PPS
> xdp_redirect_map:
> generic mode: 1.9M PPS
> driver mode: 10.4M PPS
Ah, now we're getting somewhere! :)
> Kernel 5.7 + my patch (ingress i40e, egress i40e)
> bridge: 1.8M PPS
> xdp_redirect_map:
> generic mode: 1.86M PPS
> driver mode: 10.17M PPS
Right, so this corresponds to a ~2 ns per-packet overhead (10**9/10170000 -
10**9/10400000). That is not too far from being in the noise, I suppose;
is the difference consistent?
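
Spelled out: 10**9/10170000 ≈ 98.3 ns per packet with the patch, versus
10**9/10400000 ≈ 96.2 ns without it, i.e. roughly 2.2 ns of extra
processing per forwarded packet.
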
> xdp_redirect_map_multi:
> generic mode: 1.53M PPS
> driver mode: 7.22M PPS
>
> Kernel 5.7 + my patch (ingress i40e, egress veth)
> xdp_redirect_map:
> generic mode: 1.38M PPS
> driver mode: 4.15M PPS
> xdp_redirect_map_multi:
> generic mode: 1.13M PPS
> driver mode: 3.55M PPS
>
> Kernel 5.7 + my patch (ingress i40e, egress i40e + veth)
> xdp_redirect_map_multi:
> generic mode: 1.13M PPS
> driver mode: 3.47M PPS
>
> I added a group with i40e ingress and veth egress, which shows
> a significant drop in speed. It looks like the veth driver is the bottleneck,
> but I don't have more i40e NICs on the test bed...
I suspect this may be because veth ends up creating an SKB for each
packet after receiving the frame on the peer device (even though it's
immediately dropped). Could you please try adding an XDP program that
drops the packets on the veth peer of your target, and see if that
helps?
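Something like this minimal sketch should do; the file, program and section
names are arbitrary placeholders, so adjust as you see fit:

  /* xdp_drop_kern.c: unconditionally drop every frame at the XDP hook on
   * the veth peer, before the stack would build an SKB for it. */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("xdp")
  int xdp_drop(struct xdp_md *ctx)
  {
          return XDP_DROP;
  }

  char _license[] SEC("license") = "GPL";

Compile with something like 'clang -O2 -g -target bpf -c xdp_drop_kern.c -o
xdp_drop_kern.o' and attach it with 'ip link set dev <veth peer> xdp obj
xdp_drop_kern.o sec xdp' (iproute2 syntax from memory; adjust to taste).
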
-Toke