Date:   Mon, 20 May 2019 09:53:18 +0800
From:   wenxu <wenxu@...oud.cn>
To:     Roi Dayan <roid@...lanox.com>, Saeed Mahameed <saeedm@...lanox.com>
Cc:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Bug or misconfiguration with mlx5e LAG and multipath

Hi Roi & Saeed,

I just tested the mlx5e LAG and multipath feature. In some situations, the outgoing traffic can't be offloaded.

The OVS configuration is as follows:

# ovs-vsctl show
dfd71dfb-6e22-423e-b088-d2022103af6b
    Bridge "br0"
        Port "mlx_pf0vf0"
            Interface "mlx_pf0vf0"
        Port gre
            Interface gre
                type: gre
                options: {key="1000", local_ip="172.168.152.75", remote_ip="172.168.152.241"}
        Port "br0"
            Interface "br0"
                type: internal
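
For reference, the bridge above can be reproduced with something like:

ovs-vsctl add-br br0
ovs-vsctl add-port br0 mlx_pf0vf0
ovs-vsctl add-port br0 gre -- set interface gre type=gre \
    options:key=1000 options:local_ip=172.168.152.75 options:remote_ip=172.168.152.241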

Set up the mlx5e driver:

modprobe mlx5_core
# (re)create two VFs on each PF
echo 0 > /sys/class/net/eth2/device/sriov_numvfs
echo 0 > /sys/class/net/eth3/device/sriov_numvfs
echo 2 > /sys/class/net/eth2/device/sriov_numvfs
echo 2 > /sys/class/net/eth3/device/sriov_numvfs
lspci -nn | grep Mellanox
# unbind the VFs from the host mlx5_core driver
echo 0000:81:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:81:00.3 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:81:03.6 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:81:03.7 > /sys/bus/pci/drivers/mlx5_core/unbind

# switch both PFs to switchdev mode with tunnel encap offload enabled
devlink dev eswitch set pci/0000:81:00.0  mode switchdev encap enable
devlink dev eswitch set pci/0000:81:00.1  mode switchdev encap enable

# recreate bond0 as an 802.3ad (LACP) bond over the two PF uplinks
modprobe bonding mode=802.3ad miimon=100 lacp_rate=1
ip l del dev bond0
ifconfig mlx_p0 down
ifconfig mlx_p1 down
ip l add dev bond0 type bond mode 802.3ad
ifconfig bond0 172.168.152.75/24 up
# xmit_hash_policy 1 = layer3+4
echo 1 > /sys/class/net/bond0/bonding/xmit_hash_policy
ip l set dev mlx_p0 master bond0
ip l set dev mlx_p1 master bond0
ifconfig mlx_p0 up
ifconfig mlx_p1 up

systemctl start openvswitch
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch
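
As a sanity check, the hw-offload setting and the flows actually offloaded to the datapath can be inspected with:

ovs-vsctl get Open_vSwitch . other_config:hw-offload
ovs-appctl dpctl/dump-flows type=offloaded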


The VF behind the mlx_pf0vf0 representor is assigned to a VM. The tc rule on the representor shows in_hw:

# tc filter ls dev mlx_pf0vf0 ingress
filter protocol ip pref 2 flower
filter protocol ip pref 2 flower handle 0x1
  dst_mac 8e:c0:bd:bf:72:c3
  src_mac 52:54:00:00:12:75
  eth_type ipv4
  ip_tos 0/3
  ip_flags nofrag
  in_hw
    action order 1: tunnel_key set
    src_ip 172.168.152.75
    dst_ip 172.168.152.241
    key_id 1000 pipe
    index 2 ref 1 bind 1
 
    action order 2: mirred (Egress Redirect to device gre_sys) stolen
     index 2 ref 1 bind 1

In the VM, the mlx5e driver enables XPS by default (as an aside, I think it would be better not to enable XPS by default, so the kernel can select a queue per flow). In LAG mode, different VF queues are associated with different hardware PFs.
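
If you want to rule XPS out on the VM side, it can be disabled per tx queue ("eth0" here is just a placeholder for the VF netdev name in the VM):

# clear the CPU mask of every tx queue, disabling XPS
for q in /sys/class/net/eth0/queues/tx-*/xps_cpus; do
    echo 0 > "$q"
done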

With the command "taskset -c 2 ping 10.0.0.241", the packets are
offloaded and the outgoing PF is mlx_p0.

But with "taskset -c 1 ping 10.0.0.241", the packets are not
offloaded: I can capture them on mlx_pf0vf0, and the outgoing PF is
mlx_p1, even though the tc flower rule shows in_hw.
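
The software path is easy to confirm on the representor; when offload works, no ICMP should show up here:

tcpdump -nei mlx_pf0vf0 icmp &
taskset -c 1 ping 10.0.0.241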


I checked in the driver: the tc rule is installed successfully on both mlx_pf0vf0 and its peer (mlx_p1).

So I think this is a problem with LAG mode. Or am I missing some configuration?


BR

wenxu




