Message-ID: <871rmvkvwn.fsf@toke.dk>
Date:   Thu, 04 Jun 2020 11:44:24 +0200
From:   Toke Høiland-Jørgensen <toke@...hat.com>
To:     Hangbin Liu <liuhangbin@...il.com>
Cc:     bpf@...r.kernel.org, netdev@...r.kernel.org,
        Jiri Benc <jbenc@...hat.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Eelco Chaudron <echaudro@...hat.com>, ast@...nel.org,
        Daniel Borkmann <daniel@...earbox.net>,
        Lorenzo Bianconi <lorenzo.bianconi@...hat.com>
Subject: Re: [PATCHv4 bpf-next 0/2] xdp: add dev map multicast support

Hangbin Liu <liuhangbin@...il.com> writes:

> On Wed, Jun 03, 2020 at 01:05:28PM +0200, Toke Høiland-Jørgensen wrote:
>> > Hi Toke,
>> >
>> > Here are the results I got testing with 2 i40e 10G ports on a physical machine.
>> > The pktgen pkt_size is 64.
>> 
>> These numbers seem a bit low (I'm getting ~8.5M PPS on my test machine
>> for a simple redirect). Some of that may just be performance of the
>> machine, I guess (what are you running this on?), but please check that
>> you are not limited by pktgen itself - i.e., that pktgen is generating
>> traffic at a higher rate than what XDP is processing.
>
> Here is the test topology, which looks like
>
>  Host A    |     Host B        |        Host C
>  eth0      +    eth0 - eth1    +        eth0
>
> I ran pktgen on Host A and did the forwarding on Host B.
> Host B is a Dell PowerEdge R730 (128G memory, Intel(R) Xeon(R) CPU E5-2690 v3)
> eth0 and eth1 are onboard 10G ports using the i40e driver
>
> Test 1: add eth0, eth1 to br0 and test bridge forwarding
> Test 2: Test xdp_redirect_map(), eth0 is ingress, eth1 is egress
> Test 3: Test xdp_redirect_map_multi(), eth0 is ingress, eth1 is egress

Right, that all seems reasonable, but that machine is comparable to
my test machine, so you should be getting way more than 2.75 MPPS on a
regular redirect test. Are you bottlenecked on pktgen or something?
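
For reference, a rough (untested) Python sketch of driving pktgen on Host A
through /proc/net/pktgen, sending 64-byte packets as fast as the NIC allows;
the device name, destination IP and destination MAC below are placeholders
for this setup, and the kernel's samples/pktgen scripts do the same job more
thoroughly:

#!/usr/bin/env python3
# Rough pktgen sketch: requires 'modprobe pktgen' first.
# DEV/DST_IP/DST_MAC are placeholders for Host A's sender interface
# and Host B's receiving interface.
DEV = "eth0"
DST_IP = "192.168.1.2"
DST_MAC = "aa:bb:cc:dd:ee:ff"

def pg_write(path, cmd):
    # Each pktgen command is a single line written to a /proc file.
    with open(path, "w") as f:
        f.write(cmd + "\n")

# Bind the device to the first pktgen kernel thread.
pg_write("/proc/net/pktgen/kpktgend_0", "rem_device_all")
pg_write("/proc/net/pktgen/kpktgend_0", "add_device " + DEV)

# 64-byte packets, no inter-packet delay, run until stopped.
dev_file = "/proc/net/pktgen/" + DEV
pg_write(dev_file, "count 0")
pg_write(dev_file, "pkt_size 64")
pg_write(dev_file, "delay 0")
pg_write(dev_file, "dst " + DST_IP)
pg_write(dev_file, "dst_mac " + DST_MAC)

# Start transmitting; this write blocks while pktgen is running, and
# reading /proc/net/pktgen/<dev> afterwards reports the achieved pps.
pg_write("/proc/net/pktgen/pgctrl", "start")

Comparing the pps pktgen reports on Host A with what XDP counts on Host B
should show whether the generator is the bottleneck.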

Could you please try running Jesper's ethtool stats poller:
https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl

on eth0 on Host B, and see what PPS values you get on the different counters?
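
If perl is not handy, a minimal Python stand-in (untested) that samples
'ethtool -S' once a second and prints the per-second delta of every moving
counter would look roughly like this; the interface name is a placeholder:

#!/usr/bin/env python3
# Minimal stand-in for an ethtool stats poller: sample 'ethtool -S'
# once per second and print the per-second delta for every counter
# that is moving. DEV is a placeholder for Host B's ingress interface.
import subprocess
import time

DEV = "eth0"

def read_stats(dev):
    out = subprocess.run(["ethtool", "-S", dev], capture_output=True,
                         text=True, check=True).stdout
    stats = {}
    for line in out.splitlines():
        name, sep, value = line.rpartition(":")
        if not sep:
            continue
        try:
            stats[name.strip()] = int(value)
        except ValueError:
            continue
    return stats

prev = read_stats(DEV)
while True:
    time.sleep(1)
    cur = read_stats(DEV)
    for name in sorted(cur, key=lambda n: cur[n] - prev.get(n, 0),
                       reverse=True):
        delta = cur[name] - prev.get(name, 0)
        if delta:
            print("%-40s %12d /s" % (name, delta))
    print("---")
    prev = cur

That should show both the rate the NIC is actually receiving at and where
packets are being dropped.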

>> > Bridge forwarding (I use samples/bpf/xdp1 to count the PPS, so there are numbers for two modes):
>> > generic mode: 1.32M PPS
>> > driver mode: 1.66M PPS
>> 
>> I'm not sure I understand this - what are you measuring here exactly?
>
>> Finally, since the overhead seems to be quite substantial: A comparison
>> with a regular network stack bridge might make sense? After all we also
>> want to make sure it's a performance win over that :)
>
> I thought you wanted me to also test with bridge forwarding. Am I missing something?

Yes, but what does this mean:
> (I use samples/bpf/xdp1 to count the PPS, so there are numbers for two modes):

or rather, why are there two numbers? :)
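
For the record, a rough sketch of the Test 1 kernel-bridge baseline on
Host B, assuming iproute2 and the interface names from the topology above:

#!/usr/bin/env python3
# Rough sketch of the Test 1 baseline: enslave eth0 and eth1 to a
# Linux bridge on Host B so forwarding goes through the regular
# network stack. Interface names follow the topology above.
import subprocess

def ip(*args):
    subprocess.run(["ip"] + list(args), check=True)

ip("link", "add", "br0", "type", "bridge")
for dev in ("eth0", "eth1"):
    ip("link", "set", dev, "master", "br0")
    ip("link", "set", dev, "up")
ip("link", "set", "br0", "up")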

>> > xdp_redirect_map:
>> > generic mode: 1.88M PPS
>> > driver mode: 2.74M PPS
>> 
>> Please add numbers without your patch applied as well, for comparison.
>
> OK, I will.
>> 
>> > xdp_redirect_map_multi:
>> > generic mode: 1.38M PPS
>> > driver mode: 2.73M PPS
>> 
>> I assume this is with a single interface only, right? Could you please
>> add a test with a second interface (so the packet is cloned) as well?
>> You can just use a veth as the second target device.
>
> OK, so the topology on Host B should be like
>
> eth0 + eth1 + veth0, eth0 as ingress, eth1 and veth0 as egress, right?

Yup, exactly!
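
A rough sketch (untested) of adding that veth pair on Host B, assuming
iproute2; the sample then gets eth1 and veth0 as the two egress devices:

#!/usr/bin/env python3
# Rough sketch: create the veth pair used as the second egress target
# on Host B and bring both ends up. Names are placeholders matching
# the topology discussed above.
import subprocess

def ip(*args):
    subprocess.run(["ip"] + list(args), check=True)

ip("link", "add", "veth0", "type", "veth", "peer", "name", "veth1")
ip("link", "set", "veth0", "up")
ip("link", "set", "veth1", "up")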

-Toke
