Message-ID: <2474fa1b-c69f-22e8-3358-937457667262@ucloud.cn>
Date:   Thu, 5 Dec 2019 11:41:35 +0800
From:   wenxu <wenxu@...oud.cn>
To:     Roi Dayan <roid@...lanox.com>
Cc:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Saeed Mahameed <saeedm@...lanox.com>
Subject: Re: Bad performance for VF outgoing in offloaded mode


On 12/4/2019 9:50 PM, Roi Dayan wrote:
>
> On 2019-11-28 7:03 AM, wenxu wrote:
>> Hi Mellanox team,
>>
>>
>> I did a performance test of tc offload with the upstream kernel:
>>
>> I set up a VM with a VF as eth0.
>>
>> In the VM:
>>
>> ifconfig eth0 10.0.0.75/24 up
>>
>>
>> On the host, mlx_p0 is the PF representor and mlx_pf0vf0 is the VF representor.
>>
>> The device is in switchdev mode:
>>
>> # grep -ri "" /sys/class/net/*/phys_* 2>/dev/null
>> /sys/class/net/mlx_p0/phys_port_name:p0
>> /sys/class/net/mlx_p0/phys_switch_id:34ebc100034b6b50
>> /sys/class/net/mlx_pf0vf0/phys_port_name:pf0vf0
>> /sys/class/net/mlx_pf0vf0/phys_switch_id:34ebc100034b6b50
>> /sys/class/net/mlx_pf0vf1/phys_port_name:pf0vf1
>> /sys/class/net/mlx_pf0vf1/phys_switch_id:34ebc100034b6b50
>>
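>> For reference, switchdev mode is typically enabled beforehand with a devlink
>> command along these lines (the PCI address below is only a placeholder, not
>> the actual address in this setup):
>>
>> devlink dev eswitch set pci/0000:03:00.0 mode switchdev
>>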
>>
>> The tc filters are as follows: they simply forward IP/ARP packets between mlx_p0 and mlx_pf0vf0 in both directions.
>>
>> tc qdisc add dev mlx_p0 ingress
>> tc qdisc add dev mlx_pf0vf0 ingress
>>
>> tc filter add dev mlx_pf0vf0 pref 2 ingress  protocol ip flower skip_sw action mirred egress redirect dev mlx_p0
>> tc filter add dev mlx_p0 pref 2 ingress  protocol ip flower skip_sw action mirred egress redirect dev mlx_pf0vf0
>>
>> tc filter add dev mlx_pf0vf0 pref 1 ingress  protocol arp flower skip_sw action mirred egress redirect dev mlx_p0
>> tc filter add dev mlx_p0 pref 1 ingress  protocol arp flower skip_sw action mirred egress redirect dev mlx_pf0vf0
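>>
>> Whether the rules really landed in hardware can be checked via the in_hw
>> flag and the hardware counters in the filter dump, e.g.:
>>
>> tc -s filter show dev mlx_pf0vf0 ingress | grep -E 'in_hw|hardware'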
>>
>>
>> On the remote server, device eth0:
>>
>> ifconfig eth0 10.0.0.241/24
>>
>>
>> test case 1: TCP receive from VF to PF
>>
>> In the VM: iperf -s
>>
>> On the remote server:
>>
>> iperf -c 10.0.0.75 -t 10 -i 2
>> ------------------------------------------------------------
>> Client connecting to 10.0.0.75, TCP port 5001
>> TCP window size: 85.0 KByte (default)
>> ------------------------------------------------------------
>> [  3] local 10.0.0.241 port 59708 connected with 10.0.0.75 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [  3]  0.0- 2.0 sec  5.40 GBytes  23.2 Gbits/sec
>> [  3]  2.0- 4.0 sec  5.35 GBytes  23.0 Gbits/sec
>> [  3]  4.0- 6.0 sec  5.46 GBytes  23.5 Gbits/sec
>> [  3]  6.0- 8.0 sec  5.10 GBytes  21.9 Gbits/sec
>> [  3]  8.0-10.0 sec  5.36 GBytes  23.0 Gbits/sec
>> [  3]  0.0-10.0 sec  26.7 GBytes  22.9 Gbits/sec
>>
>>
>> Good performance with offload.
>>
>> # tc -s filter ls dev mlx_p0 ingress
>> filter protocol arp pref 1 flower chain 0
>> filter protocol arp pref 1 flower chain 0 handle 0x1
>>   eth_type arp
>>   skip_sw
>>   in_hw in_hw_count 1
>>     action order 1: mirred (Egress Redirect to device mlx_pf0vf0) stolen
>>      index 4 ref 1 bind 1 installed 971 sec used 82 sec
>>      Action statistics:
>>     Sent 420 bytes 7 pkt (dropped 0, overlimits 0 requeues 0)
>>     Sent software 0 bytes 0 pkt
>>     Sent hardware 420 bytes 7 pkt
>>     backlog 0b 0p requeues 0
>>
>> filter protocol ip pref 2 flower chain 0
>> filter protocol ip pref 2 flower chain 0 handle 0x1
>>   eth_type ipv4
>>   skip_sw
>>   in_hw in_hw_count 1
>>     action order 1: mirred (Egress Redirect to device mlx_pf0vf0) stolen
>>      index 2 ref 1 bind 1 installed 972 sec used 67 sec
>>      Action statistics:
>>     Sent 79272204362 bytes 91511261 pkt (dropped 0, overlimits 0 requeues 0)
>>     Sent software 0 bytes 0 pkt
>>     Sent hardware 79272204362 bytes 91511261 pkt
>>     backlog 0b 0p requeues 0
>>
>> #  tc -s filter ls dev mlx_pf0vf0 ingress
>> filter protocol arp pref 1 flower chain 0
>> filter protocol arp pref 1 flower chain 0 handle 0x1
>>   eth_type arp
>>   skip_sw
>>   in_hw in_hw_count 1
>>     action order 1: mirred (Egress Redirect to device mlx_p0) stolen
>>      index 3 ref 1 bind 1 installed 978 sec used 88 sec
>>      Action statistics:
>>     Sent 600 bytes 10 pkt (dropped 0, overlimits 0 requeues 0)
>>     Sent software 0 bytes 0 pkt
>>     Sent hardware 600 bytes 10 pkt
>>     backlog 0b 0p requeues 0
>>
>> filter protocol ip pref 2 flower chain 0
>> filter protocol ip pref 2 flower chain 0 handle 0x1
>>   eth_type ipv4
>>   skip_sw
>>   in_hw in_hw_count 1
>>     action order 1: mirred (Egress Redirect to device mlx_p0) stolen
>>      index 1 ref 1 bind 1 installed 978 sec used 73 sec
>>      Action statistics:
>>     Sent 71556027574 bytes 47805525 pkt (dropped 0, overlimits 0 requeues 0)
>>     Sent software 0 bytes 0 pkt
>>     Sent hardware 71556027574 bytes 47805525 pkt
>>     backlog 0b 0p requeues 0
>>
>>
>>
>> test case 2: TCP send from VF to PF
>>
>> On the remote server: iperf -s
>>
>> In the VM:
>>
>> # iperf -c 10.0.0.241 -t 10 -i 2
>>
>> ------------------------------------------------------------
>> Client connecting to 10.0.0.241, TCP port 5001
>> TCP window size:  230 KByte (default)
>> ------------------------------------------------------------
>> [  3] local 10.0.0.75 port 53166 connected with 10.0.0.241 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [  3]  0.0- 2.0 sec   939 MBytes  3.94 Gbits/sec
>> [  3]  2.0- 4.0 sec   944 MBytes  3.96 Gbits/sec
>> [  3]  4.0- 6.0 sec  1.01 GBytes  4.34 Gbits/sec
>> [  3]  6.0- 8.0 sec  1.03 GBytes  4.44 Gbits/sec
>> [  3]  8.0-10.0 sec  1.02 GBytes  4.39 Gbits/sec
>> [  3]  0.0-10.0 sec  4.90 GBytes  4.21 Gbits/sec
>>
>>
>> Bad performance with offload, even though all the packets are offloaded.
>>
>> Is this an offload problem in the hardware?
>>
>>
>> BR
>>
>> wenxu
>>
>>
> Hi Wenxu,
>
> We haven't noticed this behavior.
> Could it be that your VM doesn't have enough resources to generate the traffic?
> As a listener it only sends the ACKs.

Sorry, I found that the problem was with the remote server (10.0.0.241).
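
For anyone hitting something similar: one quick way to rule out a sender-side
CPU bottleneck is to watch per-core load during the run and to retry with
parallel streams, e.g. (assuming sysstat is installed for mpstat):

mpstat -P ALL 1
iperf -c 10.0.0.241 -t 10 -i 2 -P 4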

>
> Thanks,
> Roi
