Message-ID: <77027868-2b96-8c1d-f485-c7b36c6d9fa9@itcare.pl>
Date:   Fri, 9 Nov 2018 11:20:43 +0100
From:   Paweł Staszewski <pstaszewski@...are.pl>
To:     David Ahern <dsahern@...il.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     netdev <netdev@...r.kernel.org>, Yoel Caspersen <yoel@...knet.dk>
Subject: Re: Kernel 4.19 network performance - forwarding/routing normal users
 traffic



On 08.11.2018 at 17:06, David Ahern wrote:
> On 11/8/18 6:33 AM, Paweł Staszewski wrote:
>>
>> On 07.11.2018 at 22:06, David Ahern wrote:
>>> On 11/3/18 6:24 PM, Paweł Staszewski wrote:
>>>>> Does your setup have any other device types besides physical ports with
>>>>> VLANs (e.g., any macvlans or bonds)?
>>>>>
>>>>>
>>>> no.
>>>> just
>>>> phy(mlnx)->vlans only config
>>> VLAN and non-VLAN (and a mix) seem to work ok. Patches are here:
>>>      https://github.com/dsahern/linux.git bpf/kernel-tables-wip
>>>
>>> I got lazy with the vlan exports; right now it requires 8021q to be
>>> builtin (CONFIG_VLAN_8021Q=y)
>>>
>>> You can use the xdp_fwd sample:
>>>     make O=kbuild -C samples/bpf -j 8
>>>
>>> Copy samples/bpf/xdp_fwd_kern.o and samples/bpf/xdp_fwd to the server
>>> and run:
>>>      ./xdp_fwd <list of NIC ports>
>>>
>>> e.g., in my testing I run:
>>>      xdp_fwd eth1 eth2 eth3 eth4
>>>
>>> All of the relevant forwarding ports need to be on the same command
>>> line. This version populates a second map to verify the egress port has
>>> XDP enabled.
>> Installed today on a lab server with a Mellanox ConnectX-4.
>>
>> Trying some simple static routing first, but after enabling the XDP
>> program the receiver is not receiving any frames.
>>
>> The route table is as simple as possible for the tests :)
>>
>> ICMP ping test sent from 192.168.22.237 to 172.16.0.2; the incoming
>> packets arrive on vlan 4081.
>>
>> ip r
>> default via 192.168.22.236 dev vlan4081
>> 172.16.0.0/30 dev vlan1740 proto kernel scope link src 172.16.0.1
>> 192.168.22.0/24 dev vlan4081 proto kernel scope link src 192.168.22.205
>>
>> neigh table:
>> ip neigh ls
>>
>> 192.168.22.237 dev vlan4081 lladdr 00:25:90:fb:a6:8d REACHABLE
>> 172.16.0.2 dev vlan1740 lladdr ac:1f:6b:2c:2e:5a REACHABLE
>>
>> and interfaces:
>> 4: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
>> UP mode DEFAULT group default qlen 1000
>>      link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
>> UP mode DEFAULT group default qlen 1000
>>      link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP mode DEFAULT group default qlen 1000
>>      link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP mode DEFAULT group default qlen 1000
>>      link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>
>> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp/id:5 qdisc
>> mq state UP group default qlen 1000
>>      link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>      inet6 fe80::ae1f:6bff:fe07:c891/64 scope link
>>         valid_lft forever preferred_lft forever
>> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP group default qlen 1000
>>      link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>>      inet 192.168.22.205/24 scope global vlan4081
>>         valid_lft forever preferred_lft forever
>>      inet6 fe80::ae1f:6bff:fe07:c890/64 scope link
>>         valid_lft forever preferred_lft forever
>> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP group default qlen 1000
>>      link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>      inet 172.16.0.1/30 scope global vlan1740
>>         valid_lft forever preferred_lft forever
>>      inet6 fe80::ae1f:6bff:fe07:c891/64 scope link
>>         valid_lft forever preferred_lft forever
>>
>>
>> XDP program detached:
>> Receiving side tcpdump:
>> 14:28:09.141233 IP 192.168.22.237 > 172.16.0.2: ICMP echo request, id
>> 30227, seq 487, length 64
>>
>> I can see the ICMP requests.
>>
>>
>> Enabling XDP:
>> ./xdp_fwd enp175s0f1 enp175s0f0
>>
>> 4: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp qdisc mq
>> state UP mode DEFAULT group default qlen 1000
>>      link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>>      prog/xdp id 5 tag 3c231ff1e5e77f3f
>> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp qdisc mq
>> state UP mode DEFAULT group default qlen 1000
>>      link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>      prog/xdp id 5 tag 3c231ff1e5e77f3f
>> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP mode DEFAULT group default qlen 1000
>>      link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP mode DEFAULT group default qlen 1000
>>      link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>
> What hardware is this?
>
> Start with:
>
> echo 1 > /sys/kernel/debug/tracing/events/xdp/enable
> cat /sys/kernel/debug/tracing/trace_pipe
>
> From there, you can check the FIB lookups:
> sysctl -w kernel.perf_event_max_stack=16
> perf record -e fib:* -a -g -- sleep 5
> perf script
>

I just caught some weird behavior :)
Everything was working fine for about 20k packets.

Then XDP started forwarding only about every 10th packet:
ping 172.16.0.2 -i 0.1
PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
64 bytes from 172.16.0.2: icmp_seq=1 ttl=64 time=5.12 ms
64 bytes from 172.16.0.2: icmp_seq=9 ttl=64 time=5.20 ms
64 bytes from 172.16.0.2: icmp_seq=19 ttl=64 time=4.85 ms
64 bytes from 172.16.0.2: icmp_seq=29 ttl=64 time=4.91 ms
64 bytes from 172.16.0.2: icmp_seq=38 ttl=64 time=4.85 ms
64 bytes from 172.16.0.2: icmp_seq=48 ttl=64 time=5.00 ms
^C
--- 172.16.0.2 ping statistics ---
55 packets transmitted, 6 received, 89% packet loss, time 5655ms
rtt min/avg/max/mdev = 4.850/4.992/5.203/0.145 ms


And again, after some time, it went back to normal:

  ping 172.16.0.2 -i 0.1
PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
64 bytes from 172.16.0.2: icmp_seq=1 ttl=64 time=5.02 ms
64 bytes from 172.16.0.2: icmp_seq=2 ttl=64 time=5.06 ms
64 bytes from 172.16.0.2: icmp_seq=3 ttl=64 time=5.19 ms
64 bytes from 172.16.0.2: icmp_seq=4 ttl=64 time=5.07 ms
64 bytes from 172.16.0.2: icmp_seq=5 ttl=64 time=5.08 ms
64 bytes from 172.16.0.2: icmp_seq=6 ttl=64 time=5.14 ms
64 bytes from 172.16.0.2: icmp_seq=7 ttl=64 time=5.08 ms
64 bytes from 172.16.0.2: icmp_seq=8 ttl=64 time=5.17 ms
64 bytes from 172.16.0.2: icmp_seq=9 ttl=64 time=5.04 ms
64 bytes from 172.16.0.2: icmp_seq=10 ttl=64 time=5.10 ms
64 bytes from 172.16.0.2: icmp_seq=11 ttl=64 time=5.11 ms
64 bytes from 172.16.0.2: icmp_seq=12 ttl=64 time=5.13 ms
64 bytes from 172.16.0.2: icmp_seq=13 ttl=64 time=5.12 ms
64 bytes from 172.16.0.2: icmp_seq=14 ttl=64 time=5.15 ms
64 bytes from 172.16.0.2: icmp_seq=15 ttl=64 time=5.13 ms
64 bytes from 172.16.0.2: icmp_seq=16 ttl=64 time=5.04 ms
64 bytes from 172.16.0.2: icmp_seq=17 ttl=64 time=5.12 ms
64 bytes from 172.16.0.2: icmp_seq=18 ttl=64 time=5.07 ms
64 bytes from 172.16.0.2: icmp_seq=19 ttl=64 time=5.06 ms
64 bytes from 172.16.0.2: icmp_seq=20 ttl=64 time=5.12 ms
64 bytes from 172.16.0.2: icmp_seq=21 ttl=64 time=5.21 ms
64 bytes from 172.16.0.2: icmp_seq=22 ttl=64 time=4.98 ms
^C
--- 172.16.0.2 ping statistics ---
22 packets transmitted, 22 received, 0% packet loss, time 2105ms
rtt min/avg/max/mdev = 4.988/5.104/5.210/0.089 ms


I will try to catch this with debugging enabled.





I am also wondering about one thing: XDP will now bypass the VLAN counters
and other things like tcpdump.

Would it be possible to add per-VLAN counters just from XDP?
This would help me with testing.
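
Just to show what I mean, here is a rough, untested sketch of such a per-VLAN
counter in XDP (the map name vlan_pkt_cnt and the function name are made up,
and it is written against current libbpf conventions rather than the
samples/bpf style). Since only one XDP program can be attached to a device,
the counting would have to live inside the forwarding program itself (e.g.
merged into xdp_fwd_kern.c) rather than being loaded as a second program:

/* Illustration only: count packets per VLAN ID at the XDP hook.
 * The counters can be read from user space afterwards, e.g. with
 * "bpftool map dump name vlan_pkt_cnt".
 */
#include <linux/types.h>
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* 802.1Q header, defined locally since linux/if_vlan.h is not a uapi header */
struct vlan_hdr {
	__be16 h_vlan_TCI;
	__be16 h_vlan_encapsulated_proto;
};

/* Per-CPU counters indexed by VLAN ID (12 bits -> 4096 slots) */
struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__uint(max_entries, 4096);
	__type(key, __u32);
	__type(value, __u64);
} vlan_pkt_cnt SEC(".maps");

SEC("xdp")
int xdp_vlan_count(struct xdp_md *ctx)
{
	void *data     = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;
	struct vlan_hdr *vh;
	__u64 *cnt;
	__u32 vid;

	if ((void *)(eth + 1) > data_end)
		return XDP_PASS;
	if (eth->h_proto != bpf_htons(ETH_P_8021Q))
		return XDP_PASS;               /* untagged frame, nothing to count */

	vh = (void *)(eth + 1);
	if ((void *)(vh + 1) > data_end)
		return XDP_PASS;

	vid = bpf_ntohs(vh->h_vlan_TCI) & 0x0fff;
	cnt = bpf_map_lookup_elem(&vlan_pkt_cnt, &vid);
	if (cnt)
		(*cnt)++;                      /* per-CPU slot, no atomics needed */

	return XDP_PASS;                       /* counting only; forwarding decides */
}

char _license[] SEC("license") = "GPL";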


Also, in a non-lab scenario it should still be possible to sniff on an
interface sometimes :)
So I am wondering whether I would need to attach another XDP program to the
interface, or whether all of this can be done by one program (rough idea
below).
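
If it all has to stay in a single program, one rough idea (again only a sketch
with made-up names, assuming the same headers as the counter sketch above and
that this is compiled into the same object as the forwarding code) would be to
let every Nth packet fall through to the normal stack with XDP_PASS, so the
vlan devices and tcpdump still see a sample of the traffic:

/* One-slot per-CPU counter used only for sampling */
struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} sample_cnt SEC(".maps");

/* Returns 1 roughly once every "rate" packets on this CPU, 0 otherwise */
static __always_inline int sample_this_packet(__u32 rate)
{
	__u32 zero = 0;
	__u64 *n = bpf_map_lookup_elem(&sample_cnt, &zero);

	if (!n || rate == 0)
		return 0;
	return (++(*n) % rate) == 0;
}

/* In the forwarding path, before the redirect, something like:
 *
 *	if (sample_this_packet(1000))
 *		return XDP_PASS;    // kernel stack (and tcpdump) sees this one
 */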

I think it is time for me to learn more about XDP :)

