Date:   Thu, 8 Nov 2018 14:33:54 +0100
From:   Paweł Staszewski <pstaszewski@...are.pl>
To:     David Ahern <dsahern@...il.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     netdev <netdev@...r.kernel.org>, Yoel Caspersen <yoel@...knet.dk>
Subject: Re: Kernel 4.19 network performance - forwarding/routing normal users traffic



On 07.11.2018 at 22:06, David Ahern wrote:
> On 11/3/18 6:24 PM, Paweł Staszewski wrote:
>>> Does your setup have any other device types besides physical ports with
>>> VLANs (e.g., any macvlans or bonds)?
>>>
>>>
>> no.
>> just
>> phy(mlnx)->vlans only config
> VLAN and non-VLAN (and a mix) seem to work ok. Patches are here:
>     https://github.com/dsahern/linux.git bpf/kernel-tables-wip
>
> I got lazy with the vlan exports; right now it requires 8021q to be
> builtin (CONFIG_VLAN_8021Q=y)
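
(A quick way to confirm that on a given box, assuming the usual distro
config location:

    grep CONFIG_VLAN_8021Q /boot/config-$(uname -r)

which must report =y rather than =m for these patches.)
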
>
> You can use the xdp_fwd sample:
>    make O=kbuild -C samples/bpf -j 8
>
> Copy samples/bpf/xdp_fwd_kern.o and samples/bpf/xdp_fwd to the server
> and run:
>     ./xdp_fwd <list of NIC ports>
>
> e.g., in my testing I run:
>     xdp_fwd eth1 eth2 eth3 eth4
>
> All of the relevant forwarding ports need to be on the same command
> line. This version populates a second map to verify the egress port has
> XDP enabled.
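
For context, the forwarding path in samples/bpf/xdp_fwd_kern.c boils down
to: parse the Ethernet/IP headers, call bpf_fib_lookup(), and on success
rewrite the MACs and redirect through a devmap keyed by egress ifindex.
A minimal sketch of that shape (assuming modern libbpf headers; the
sample's IPv6/VLAN handling and the WIP tree's second-map check are left
out):

    /* Sketch only - not the patch itself. IPv4, no VLANs, minimal
     * validation. */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <linux/socket.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    struct {
            __uint(type, BPF_MAP_TYPE_DEVMAP);
            __type(key, int);
            __type(value, int);
            __uint(max_entries, 64);
    } tx_port SEC(".maps");

    SEC("xdp")
    int xdp_fwd_sketch(struct xdp_md *ctx)
    {
            void *data_end = (void *)(long)ctx->data_end;
            void *data = (void *)(long)ctx->data;
            struct bpf_fib_lookup fib = {};
            struct ethhdr *eth = data;
            struct iphdr *iph;

            if (data + sizeof(*eth) + sizeof(*iph) > data_end)
                    return XDP_PASS;
            if (eth->h_proto != bpf_htons(ETH_P_IP))
                    return XDP_PASS;
            iph = data + sizeof(*eth);

            fib.family   = AF_INET;
            fib.ipv4_src = iph->saddr;
            fib.ipv4_dst = iph->daddr;
            fib.ifindex  = ctx->ingress_ifindex;

            /* Non-zero means no route / unresolved neighbour / etc.;
             * fall back to the normal kernel path in that case. */
            if (bpf_fib_lookup(ctx, &fib, sizeof(fib), 0) != 0)
                    return XDP_PASS;

            /* The WIP tree consults a second map here to verify the
             * egress port has XDP attached (omitted in this sketch). */
            __builtin_memcpy(eth->h_dest, fib.dmac, ETH_ALEN);
            __builtin_memcpy(eth->h_source, fib.smac, ETH_ALEN);
            return bpf_redirect_map(&tx_port, fib.ifindex, 0);
    }

    char _license[] SEC("license") = "GPL";
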
Installed it today on a lab server with a Mellanox ConnectX-4.

Trying some simple static routing first - but after enabling the XDP
program, the receiver is not receiving frames.

The route table is as simple as possible for the tests :)

ICMP ping test sent from 192.168.22.237 to 172.16.0.2 - incoming
packets on VLAN 4081.

ip r
default via 192.168.22.236 dev vlan4081
172.16.0.0/30 dev vlan1740 proto kernel scope link src 172.16.0.1
192.168.22.0/24 dev vlan4081 proto kernel scope link src 192.168.22.205
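
To cross-check what the FIB resolves for this flow - the same answer
bpf_fib_lookup() should see - ip route get can be asked directly (a
hypothetical invocation for this setup):

    ip route get 172.16.0.2 from 192.168.22.237 iif vlan4081

which should come back with "dev vlan1740".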

Neigh table:
ip neigh ls

192.168.22.237 dev vlan4081 lladdr 00:25:90:fb:a6:8d REACHABLE
172.16.0.2 dev vlan1740 lladdr ac:1f:6b:2c:2e:5a REACHABLE

And the interfaces:
4: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
     link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
     link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
     link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
     link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff

5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp/id:5 qdisc mq state UP group default qlen 1000
     link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
     inet6 fe80::ae1f:6bff:fe07:c891/64 scope link
        valid_lft forever preferred_lft forever
6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
     link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
     inet 192.168.22.205/24 scope global vlan4081
        valid_lft forever preferred_lft forever
     inet6 fe80::ae1f:6bff:fe07:c890/64 scope link
        valid_lft forever preferred_lft forever
7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
     link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
     inet 172.16.0.1/30 scope global vlan1740
        valid_lft forever preferred_lft forever
     inet6 fe80::ae1f:6bff:fe07:c891/64 scope link
        valid_lft forever preferred_lft forever


With the XDP program detached, receiving-side tcpdump:

14:28:09.141233 IP 192.168.22.237 > 172.16.0.2: ICMP echo request, id 30227, seq 487, length 64

I can see the ICMP requests.


Enabling XDP:
./xdp_fwd enp175s0f1 enp175s0f0

4: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp qdisc mq state UP mode DEFAULT group default qlen 1000
     link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
     prog/xdp id 5 tag 3c231ff1e5e77f3f
5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp qdisc mq state UP mode DEFAULT group default qlen 1000
     link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
     prog/xdp id 5 tag 3c231ff1e5e77f3f
6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
     link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
     link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
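
As an extra sanity check, assuming bpftool is available on the box, the
attached program and the maps it uses can be listed with:

    bpftool prog show
    bpftool map show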



On the receiving side, no ICMP echo requests are coming in on the interface.

And some ethtool stats for the XDP interface that receives the ICMP
requests from the sender, to be forwarded:
ethtool -S enp175s0f0 | grep 'rx_xdp_redirect'
      rx_xdp_redirect: 321

ethtool stats for the interface that should forward the ICMP requests to
the receiver on VLAN ID 1740:

ethtool -S enp175s0f1 | grep 'tx_xdp'
      tx_xdp_xmit: 0
      tx_xdp_full: 0
      tx_xdp_err: 0
      tx_xdp_cqes: 0


No frames tx-ed.
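
Since rx_xdp_redirect increments while the tx_xdp counters stay at zero,
the redirect is apparently accepted on ingress and then fails before
transmit. One way to narrow down where, assuming the xdp tracepoints are
available on this kernel, is:

    perf record -a -e 'xdp:*' -- sleep 5
    perf script

Hits on xdp:xdp_redirect_err / xdp:xdp_redirect_map_err would point at a
failed devmap lookup on the egress side.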





>
>> And today, again after applying the patch for the page allocator -
>> reached 64/64 Gbit/s again
>>
>> with only 50-60% CPU load
> you should see the cpu load drop considerably.
>
