Message-Id: <9554367f-5ed5-42f1-af77-c72534985f04@www.fastmail.com>
Date: Thu, 28 Feb 2019 03:55:45 -0500
From: "Jingrong Chen" <jchendi@....ust.hk>
To: netdev@...r.kernel.org, nic-support@...lanox.com,
jzhangcs@...nect.ust.hk, shuaa@....ust.hk
Subject: Problem of rate limiting with VXLAN offload on Mellanox ConnectX-5 NIC
Hi all,
We ran into a performance issue when configuring the ConnectX-5 NICs to do both tc rate limiting and VXLAN encapsulation/decapsulation simultaneously.
Our test setup is as follows:
We have two servers, each with one ConnectX-5 NIC installed. Each server runs one VM. We use ib_write_bw to let one VM send RDMA traffic to the other VM. The traffic priority is configured to be 3.
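For concreteness, the traffic is generated roughly as follows. This is a sketch, not an exact transcript: the RDMA device name mlx5_0 is a placeholder, and the use of the service-level flag (-S 3) assumes that SL 3 is mapped to priority 3 in our RoCE configuration:

  # server-side VM (mlx5_0 is a placeholder for the VF's RDMA device)
  ib_write_bw -d mlx5_0 -S 3 --report_gbits
  # client-side VM, targeting the server VM's IP
  ib_write_bw -d mlx5_0 -S 3 --report_gbits <server VM IP>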
1) We configure tc rate limiting using the command "mlnx_qos -i rdma0 -r 0,0,0,X,0,0,0,0" (i.e., limit the rate of the traffic class carrying our priority-3 traffic to X Gbps).
2) We enable SR-IOV on the NIC and follow the ASAP2 Hardware Offloading for vSwitches User Manual to configure VXLAN encapsulation/decapsulation for the RDMA traffic (a rough sketch of both steps is shown right after this list).
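For reference, the two steps look roughly like the following sketch. The rate value corresponds to X = 40; the PCI address, VF representor name (eth2), VNI and tunnel endpoint addresses are placeholders standing in for our actual values, and VF driver unbinding as well as the decapsulation rule in the reverse direction are omitted for brevity:

  # step 1: rate-limit the traffic class carrying priority-3 traffic
  mlnx_qos -i rdma0 -r 0,0,0,40,0,0,0,0

  # step 2: SR-IOV + switchdev + tc flower VXLAN encap offload (sketch)
  echo 1 > /sys/class/net/rdma0/device/sriov_numvfs
  devlink dev eswitch set pci/0000:03:00.0 mode switchdev
  ip link add vxlan0 type vxlan id 100 dstport 4789 dev rdma0 \
      local 192.168.1.1 remote 192.168.1.2
  tc qdisc add dev eth2 ingress                  # eth2 = VF representor
  tc filter add dev eth2 protocol ip parent ffff: flower \
      action tunnel_key set src_ip 192.168.1.1 dst_ip 192.168.1.2 \
      id 100 dst_port 4789 \
      action mirred egress redirect dev vxlan0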
Our test results are as follows:
  Rate limit (Gbps)   Throughput (Gbps)
  X = 40              19.09   (throughput only achieved half of the configured rate)
  X = 60              28.66
  X = 80              38.25
  X = 100             47.77
  unlimited           87.68
* Note that when only tc rate limiting is enabled (without VXLAN offload), there is no such throughput degradation.
Any help in resolving this problem would be greatly appreciated. Thanks.
FYI, our machines run Ubuntu 18.04, the kernel version is 4.15.0, CX5 firmware is 16.24.1000, and the driver is MLNX_OFED_LINUX-4.5-1.0.1.0.
Regards,
Jingrong