Date:   Tue, 9 Jun 2020 11:03:44 +0800
From:   Hangbin Liu <liuhangbin@...il.com>
To:     Toke Høiland-Jørgensen <toke@...hat.com>
Cc:     bpf@...r.kernel.org, netdev@...r.kernel.org,
        Jiri Benc <jbenc@...hat.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Eelco Chaudron <echaudro@...hat.com>, ast@...nel.org,
        Daniel Borkmann <daniel@...earbox.net>,
        Lorenzo Bianconi <lorenzo.bianconi@...hat.com>
Subject: Re: [PATCHv4 bpf-next 0/2] xdp: add dev map multicast support

On Mon, Jun 08, 2020 at 05:32:54PM +0200, Toke Høiland-Jørgensen wrote:
> Hangbin Liu <liuhangbin@...il.com> writes:
> 
> > On Thu, Jun 04, 2020 at 06:02:54PM +0200, Toke Høiland-Jørgensen wrote:
> >> Hangbin Liu <liuhangbin@...il.com> writes:
> >> 
> >> > On Thu, Jun 04, 2020 at 02:37:23PM +0200, Toke Høiland-Jørgensen wrote:
> >> >> > Now I use ethtool_stats.pl to measure the forwarding rate, and here are the results:
> >> >> >
> >> >> > With kernel 5.7(ingress i40e, egress i40e)
> >> >> > XDP:
> >> >> > bridge: 1.8M PPS
> >> >> > xdp_redirect_map:
> >> >> >   generic mode: 1.9M PPS
> >> >> >   driver mode: 10.4M PPS
> >> >> 
> >> >> Ah, now we're getting somewhere! :)
> >> >> 
> >> >> > Kernel 5.7 + my patch(ingress i40e, egress i40e)
> >> >> > bridge: 1.8M
> >> >> > xdp_redirect_map:
> >> >> >   generic mode: 1.86M PPS
> >> >> >   driver mode: 10.17M PPS
> >> >> 
> >> >> Right, so this corresponds to a ~2ns overhead (10**9/10170000 -
> >> >> 10**9/10400000). This is not too far from being in the noise, I suppose;
> >> >> is the difference consistent?
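
(To make the arithmetic above explicit, here is a quick back-of-the-envelope
sketch in plain Python; the PPS figures are the driver-mode numbers quoted
above, and the variable names are just for illustration:)

# Convert packets-per-second into nanoseconds per packet and compare.
base_pps  = 10_400_000   # xdp_redirect_map, driver mode, plain kernel 5.7
patch_pps = 10_170_000   # same test with the multicast patch applied

ns_base  = 1e9 / base_pps    # ~96.2 ns per packet
ns_patch = 1e9 / patch_pps   # ~98.3 ns per packet

print(f"overhead: {ns_patch - ns_base:.2f} ns per packet")   # ~2.17 ns
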
> >> >
> >> > Sorry, I didn't get that; what do you mean by a consistent difference?
> >> 
> >> I meant, how much do the numbers vary between each test run?
> >
> > Oh, when I run it within the same period the numbers are stable; the
> > variation is about ~0.05M PPS. But after a long time, or after a reboot,
> > the speed may change a little. Here are the new test results after
> > rebooting the system:
> >
> > Kernel 5.7 + my patch(ingress i40e, egress i40e)
> > xdp_redirect_map:
> >   generic mode: 1.9M PPS
> >   driver mode: 10.2M PPS
> >
> > xdp_redirect_map_multi:
> >   generic mode: 1.58M PPS
> >   driver mode: 7.16M PPS
> >
> > Kernel 5.7 + my patch(ingress i40e, egress i40e + veth(No XDP on peer))
> > xdp_redirect_map:
> >   generic mode: 2.2M PPS
> >   driver mode: 14.2M PPS
> 
> This looks wrong - why is performance increasing when adding another
> target? How are you even adding another target to regular
> xdp_redirect_map?
> 
Oh, sorry for the typo; these numbers are driving me crazy. It should be
ingress i40e, egress veth only. Here is the correct description:

Kernel 5.7 + my patch (ingress i40e, egress i40e)
xdp_redirect_map:
  generic mode: 1.9M PPS
  driver mode: 10.2M PPS

xdp_redirect_map_multi:
  generic mode: 1.58M PPS
  driver mode: 7.16M PPS

Kernel 5.7 + my patch (ingress i40e, egress veth (no XDP on peer))
xdp_redirect_map:
  generic mode: 2.2M PPS
  driver mode: 14.2M PPS

xdp_redirect_map_multi:
  generic mode: 1.6M PPS
  driver mode: 9.9M PPS

Kernel 5.7 + my patch (ingress i40e, egress veth (with XDP_DROP on peer))
xdp_redirect_map:
  generic mode: 1.6M PPS
  driver mode: 13.6M PPS

xdp_redirect_map_multi:
  generic mode: 1.3M PPS
  driver mode: 8.7M PPS

Kernel 5.7 + my patch (ingress i40e, egress i40e + veth (no XDP on peer))
xdp_redirect_map_multi:
  generic mode: 1.15M PPS
  driver mode: 3.48M PPS

Kernel 5.7 + my patch (ingress i40e, egress i40e + veth (with XDP_DROP on peer))
xdp_redirect_map_multi:
  generic mode: 0.98M PPS
  driver mode: 3.15M PPS

The performance numbers for xdp_redirect_map_multi are not very good, but I
think we can optimize it once the implementation is in place.
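
To put that in the same per-packet terms used earlier in the thread, here is a
small sketch (plain Python; the PPS figures are the driver-mode numbers listed
above, the scenario labels are just shorthand):

# Per-packet cost (ns) of the single-target vs. multicast redirect paths,
# driver mode, taken from the measurements above.
scenarios = {
    "i40e -> i40e":                    (10_200_000, 7_160_000),
    "i40e -> veth (no XDP on peer)":   (14_200_000, 9_900_000),
    "i40e -> veth (XDP_DROP on peer)": (13_600_000, 8_700_000),
}

for name, (single_pps, multi_pps) in scenarios.items():
    single_ns = 1e9 / single_pps
    multi_ns  = 1e9 / multi_pps
    print(f"{name}: {single_ns:.1f} -> {multi_ns:.1f} ns/pkt "
          f"(+{multi_ns - single_ns:.1f} ns)")

That works out to roughly 31-42 ns of extra per-packet cost for the multicast
path in driver mode.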

Thanks
Hangbin
