Message-ID: <20201029112956.GA139728@mtl-vdi-166.wap.labs.mlnx>
Date: Thu, 29 Oct 2020 13:29:56 +0200
From: Eli Cohen <elic@...dia.com>
To: wenxu <wenxu@...oud.cn>
CC: Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: mlx5_vdpa problem
On Thu, Oct 22, 2020 at 06:40:56PM +0800, wenxu wrote:
>
> Hi mellanox team,
>
>
> I tested mlx5 vdpa on linux-5.9 and ran into several problems.
>
>
> # lspci | grep Ether | grep Dx
> b3:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
> b3:00.1 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
>
> # ethtool -i net2
> driver: mlx5e_rep
> version: 5.9.0
> firmware-version: 22.28.1002 (MT_0000000430)
> expansion-rom-version:
> bus-info: 0000:b3:00.0
> supports-statistics: yes
> supports-test: no
> supports-eeprom-access: no
> supports-register-dump: no
> supports-priv-flags: no
>
>
> init switchdev:
>
>
> # echo 1 > /sys/class/net/net2/device/sriov_numvfs
> # echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
> # devlink dev eswitch set pci/0000:b3:00.0 mode switchdev encap enable
>
> # modprobe -a vdpa vhost-vdpa mlx5_vdpa
>
> # ip l set dev net2 vf 0 mac 52:90:01:00:02:13
> # echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
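
[Side note: before launching the VM it is worth verifying that the vdpa
device and its vhost char device actually showed up, e.g.:

# ls /sys/bus/vdpa/devices
# ls -l /dev/vhost-vdpa-*

On 5.9 mlx5_vdpa adds the vdpa device automatically when the VF probes;
the names above are just examples.]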
>
>
> setup vm:
>
> # qemu-system-x86_64 -name test -enable-kvm -smp 16,sockets=2,cores=8,threads=1 -m 8192 -drive file=./centos7.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 -device virtio-net-pci,netdev=vhost-vdpa0,page-per-vq=on,iommu_platform=on,id=net0,bus=pci.0,addr=0x3,disable-legacy=on -vnc 0.0.0.0:0
>
>
> In the VM: virtio net device eth0 with IP address 10.0.0.75/24
>
> On the host: VF0 representor device pf0vf0 with IP address 10.0.0.7/24
>
>
> problem 1:
>
> On the host:
>
> # ping 10.0.0.75
>
> Sometimes packets are lost.
>
> And in the VM:
>
> dmesg shows:
>
> eth0: bad tso: type 100, size: 0
>
> eth0: bad tso: type 10, size: 28
>
>
> So I think maybe the vnet header is not initialized to 0? I then cleared the gso_type, gso_size and flags in the virtio_net driver, and after that no packets were dropped.
Hi wenxu, thanks for reporting this.
Usually you would not assign an IP address to the representor, as it
represents a switch port. Nevertheless, I will try to reproduce this
here.
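
By the way, just to make sure I understand the workaround: is the change
you made in the guest roughly like the sketch below, in receive_buf() in
drivers/net/virtio_net.c, before the vnet header is parsed? This is only
my guess at what you described, so please correct me if your patch differs:

	hdr = skb_vnet_hdr(skb);

	/* experiment: discard offload metadata coming from the device */
	hdr->hdr.flags = 0;	/* clears NEEDS_CSUM/DATA_VALID */
	hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_NONE;
	hdr->hdr.gso_size = 0;

	if (virtio_net_hdr_to_skb(skb, &hdr->hdr,
				  virtio_is_little_endian(vi->vdev))) {
		...
	}

If clearing these fields is what makes the drops go away, that does point
at the device handing up a vnet header that was not properly initialized.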
Could you repeat your experiment with two hosts, such that on the host the
representor for your VF and the uplink representor are connected to an
OVS switch, and the other host is configured with IP address 10.0.0.7/24?
Also, can you send the firmware version you're using?
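
For the two-host setup, something like this on the host with the
ConnectX-6 Dx should do, with the uplink representor (net2) and the VF
representor (pf0vf0) on the same OVS bridge (the bridge name is just an
example):

# ovs-vsctl add-br br-ovs
# ovs-vsctl add-port br-ovs net2
# ovs-vsctl add-port br-ovs pf0vf0

and with 10.0.0.7/24 configured on the second host instead of on pf0vf0.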
>
>
> problem 2:
>
> In the VM: iperf -s
>
> On the host: iperf -c 10.0.0.75 -t 100 -i 2
>
>
> The TCP connection can't be established because the SYN+ACK is sent with a partial checksum, which is not handled correctly by the hardware.
>
> After I turn csum off for eth0 in the VM, the problem is resolved, even though mlx5_vnet supports the VIRTIO_NET_F_CSUM feature.
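
To be precise about the workaround here: I assume that by "csum off" you
mean disabling TX checksum offload inside the guest, something like:

# ethtool -K eth0 tx off

so that the guest stack computes full checksums itself instead of sending
packets with VIRTIO_NET_HDR_F_NEEDS_CSUM set. Please correct me if you
disabled it differently.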
>
>
>
> problem 3:
>
>
> The iperf performance is not stable until I disable TSO offload on pf0vf0:
>
> # ethtool -K pf0vf0 tso off
>
>
> I know mlx5_vnet does not support the VIRTIO_NET_F_GUEST_TSO4 feature. But can't the hardware segment a big TSO packet into several smaller TCP packets and send those to the virtio net device?
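
Regarding the features: you can check which feature bits were actually
negotiated from inside the guest via sysfs, which prints one character
per feature bit (assuming the NIC shows up as virtio0; bit 0 is
VIRTIO_NET_F_CSUM, bit 7 is VIRTIO_NET_F_GUEST_TSO4):

# cat /sys/bus/virtio/devices/virtio0/features

When VIRTIO_NET_F_GUEST_TSO4 is not negotiated, the device must not
deliver TSO packets to the guest, so large frames from the peer have to
be segmented first (or TSO disabled on the peer, as you did).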
>
>
>
>
> BR
>
> wenxu
>
>
>