Message-ID: <258f86a8-d6ae-010a-11f8-c155b1df4723@ucloud.cn>
Date: Thu, 22 Oct 2020 18:40:56 +0800
From: wenxu <wenxu@...oud.cn>
To: Eli Cohen <elic@...dia.com>
Cc: Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: mlx5_vdpa problem
Hi Mellanox team,
I tested mlx5 vdpa on Linux 5.9 and ran into several problems.
# lspci | grep Ether | grep Dx
b3:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
b3:00.1 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
# ethtool -i net2
driver: mlx5e_rep
version: 5.9.0
firmware-version: 22.28.1002 (MT_0000000430)
expansion-rom-version:
bus-info: 0000:b3:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
init switchdev:
# echo 1 > /sys/class/net/net2/device/sriov_numvfs
# echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
# devlink dev eswitch set pci/0000:b3:00.0 mode switchdev encap enable
# modprobe vdpa vhost-vdpa mlx5_vdpa
# ip l set dev net2 vf 0 mac 52:90:01:00:02:13
# echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
setup vm:
# qemu-system-x86_64 -name test -enable-kvm -smp 16,sockets=2,cores=8,threads=1 -m 8192 -drive file=./centos7.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 -device virtio-net-pci,netdev=vhost-vdpa0,page-per-vq=on,iommu_platform=on,id=net0,bus=pci.0,addr=0x3,disable-legacy=on -vnc 0.0.0.0:0
In the VM: virtio net device eth0 with IP address 10.0.0.75/24
On the host: VF0 representor device pf0vf0 with IP address 10.0.0.7/24
problem 1:
On the host:
# ping 10.0.0.75
Sometimes packets get lost.
And in the VM:
dmesg shows:
eth0: bad tso: type 100, size: 0
eth0: bad tso: type 10, size: 28
So I think maybe the vnet header is not initialized to 0? I then cleared the gso_type, gso_size and flags fields in the virtio_net driver, and after that no packets are dropped.
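The change is roughly the sketch below (clear_rx_vnet_hdr() is just an illustrative helper name I use here to describe the hack; the real edit sits in the rx path of drivers/net/virtio_net.c, e.g. receive_buf(), and the exact spot depends on the guest kernel version):

#include <linux/virtio_net.h>

/* Debug hack (sketch): zero the GSO/csum metadata of the received vnet
 * header so the guest driver does not act on whatever the device (or the
 * vdpa backend) left in it. Call this on the rx header before the driver
 * parses it.
 */
static void clear_rx_vnet_hdr(struct virtio_net_hdr *h)
{
        h->flags    = 0;
        h->gso_type = VIRTIO_NET_HDR_GSO_NONE;  /* == 0 */
        h->gso_size = 0;
}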
problem 2:
In the VM: iperf -s
On the host: iperf -c 10.0.0.75 -t 100 -i 2
The TCP connection can't be established: the SYN+ACK goes out with a partial checksum, but that is not handled correctly by the hardware.
After I turned checksum offload off for eth0 in the VM, the problem was resolved, even though mlx5_vnet advertises the VIRTIO_NET_F_CSUM feature.
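(For reference, checksum offload can be turned off in the VM with something like the following; the exact feature names may differ by ethtool version.)
# ethtool -K eth0 rx off tx off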
problem 3:
The iperf performance is not stable until I disable TSO offload on pf0vf0:
# ethtool -K pf0vf0 tso off
I know mlx5_vnet does not support the VIRTIO_NET_F_GUEST_TSO4 feature. But can't the hardware segment a big TSO packet into several smaller TCP packets and send them to the virtio net device?
BR
wenxu