Message-ID: <40968d30-f4c4-5f9c-5c6c-fe3d7e5571a3@ucloud.cn>
Date: Fri, 30 Oct 2020 15:50:21 +0800
From: wenxu <wenxu@...oud.cn>
To: Eli Cohen <elic@...dia.com>
Cc: Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: mlx5_vdpa problem
Hi Eli,
Thanks for your reply.
I updated the firmware to the latest one:
firmware-version: 22.28.4000 (MT_0000000430)
The same problems as in my description are still there, and the test
you suggested gives the same result.
I did the experiment with two hosts: on one host, the representor for
the VF and the uplink representor are attached to an OVS switch, and
the other host is configured with IP address 10.0.0.7/24. OVS
hardware offload is enabled, so the packets do not go through the
VF's representor port; the setting is shown below.
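The offload setting on the OVS side was along these lines (a sketch;
this is the standard OVS hw-offload knob, applied before the test):

# ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
# systemctl restart openvswitch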
The same problems still occur with this setup, so I think it may be a FW bug. Thx.
On 10/29/2020 8:45 PM, Eli Cohen wrote:
> On Thu, Oct 22, 2020 at 06:40:56PM +0800, wenxu wrote:
>
> Please make sure your firmware is updated.
>
> https://www.mellanox.com/support/firmware/connectx6dx
>
>> Hi mellanox team,
>>
>>
>> I tested mlx5 vdpa on linux-5.9 and ran into several problems.
>>
>>
>> # lspci | grep Ether | grep Dx
>> b3:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
>> b3:00.1 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
>>
>> # ethtool -i net2
>> driver: mlx5e_rep
>> version: 5.9.0
>> firmware-version: 22.28.1002 (MT_0000000430)
>> expansion-rom-version:
>> bus-info: 0000:b3:00.0
>> supports-statistics: yes
>> supports-test: no
>> supports-eeprom-access: no
>> supports-register-dump: no
>> supports-priv-flags: no
>>
>>
>> init switchdev:
>>
>>
>> # echo 1 > /sys/class/net/net2/device/sriov_numvfs
>> # echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
>> # devlink dev eswitch set pci/0000:b3:00.0 mode switchdev encap enable
>>
>> # modprobe vdpa vhost-vdpa mlx5_vdpa
>>
>> # ip l set dev net2 vf 0 mac 52:90:01:00:02:13
>> # echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
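>>
>> As a sanity check, the eswitch mode can be confirmed with (standard
>> devlink command, not specific to this setup):
>>
>> # devlink dev eswitch show pci/0000:b3:00.0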
>>
>>
>> setup vm:
>>
>> # qemu-system-x86_64 -name test -enable-kvm -smp 16,sockets=2,cores=8,threads=1 -m 8192 -drive file=./centos7.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 -device virtio-net-pci,netdev=vhost-vdpa0,page-per-vq=on,iommu_platform=on,id=net0,bus=pci.0,addr=0x3,disable-legacy=on -vnc 0.0.0.0:0
>>
>>
>> In the VM: the virtio net device eth0 has IP address 10.0.0.75/24.
>>
>> On the host: the VF0 representor device pf0vf0 has IP address 10.0.0.7/24.
>>
>>
>> problem 1:
>>
>> On the host:
>>
>> # ping 10.0.0.75
>>
>> Sometimes there is packet loss.
>>
>> And in the VM:
>>
>> dmesg shows:
>>
>> eth0: bad tso: type 100, size: 0
>>
>> eth0: bad tso: type 10, size: 28
>>
>>
>> So I think maybe the vnet header is not initialized to 0? I then cleared the gso_type, gso_size and flags fields in the virtio_net driver, and after that no packets were dropped.
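>>
>> Roughly the debug change (a minimal sketch against the receive path
>> in drivers/net/virtio_net.c; the exact location may differ by
>> kernel version):
>>
>> /* Debug hack, not a fix: zero the vnet header GSO fields before
>>  * parsing, to check whether the device leaves them uninitialized.
>>  */
>> hdr->hdr.gso_type = 0;
>> hdr->hdr.gso_size = 0;
>> hdr->hdr.flags = 0;
>>
>> if (virtio_net_hdr_to_skb(skb, &hdr->hdr,
>>                           virtio_is_little_endian(vi->vdev))) {
>>         /* the "bad tso" warning is printed on this failure path */
>>         goto frame_err;
>> }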
>>
>>
>> problem 2:
>>
>> In the vm: iperf -s
>>
>> On the host: iperf -c 10.0.0.75 -t 100 -i 2.
>>
>>
>> The TCP connection can't be established: the SYN+ACK is sent with a partial checksum, which is not handled correctly by the hardware.
>>
>> After I turned checksum offload off for eth0 in the VM, the problem was resolved, even though mlx5_vnet supports the VIRTIO_NET_F_CSUM feature.
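>>
>> For reference, checksum offload in the guest can be turned off with
>> something like:
>>
>> # ethtool -K eth0 rx off tx off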
>>
>>
>>
>> problem 3:
>>
>>
>> The iperf performance is not stable until I disable TSO offload on pf0vf0:
>>
>> # ethtool -K pf0vf0 tso off
>>
>>
>> I know mlx5_vnet does not support the VIRTIO_NET_F_GUEST_TSO4 feature. But can't the hardware segment the big TSO packet into several smaller TCP packets before delivering them to the virtio net device?
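>>
>> For completeness, the feature bits actually negotiated can be read
>> from the guest via sysfs (the virtio device name may differ):
>>
>> # cat /sys/bus/virtio/devices/virtio0/features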
>>
>>
>>
>>
>> BR
>>
>> wenxu
>>
>>
>>