Message-ID: <CAJJdyE1THwu70UrErbQQgmDPRrBjapQLdgv-epgEXMTyfRoQEA@mail.gmail.com>
Date: Tue, 24 Mar 2020 20:35:25 -0400
From: Eduard Guzovsky <eguzovsky@...il.com>
To: netdev@...r.kernel.org
Subject: Significant performance degradation after updating i40e driver from
version 2.1.14-k to 2.3.2-k
I have two Linux boxes connected via a dedicated 40Gb link, both using
Intel XL710 cards. I ran unidirectional single-stream TCP performance
tests using iperf2.
In these tests Box 1 acted as the client (sender) and Box 2 as the
server (receiver). No special ethtool adjustments were made on the
ethernet devices - I used the defaults.
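For reference, the runs looked roughly like the following (the iperf2
options shown are illustrative, not a verbatim copy of my command
lines):

    # Box 2 (receiver)
    iperf -s

    # Box 1 (sender), single unidirectional TCP stream
    iperf -c <Box 2 IP> -t 60 -i 1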
Nothing changed in the Box 2 setup across the tests. It has an Intel
Xeon E5-1650 v3 @ 3.50GHz with 64 GB of memory and ran CentOS 6, kernel
version 2.6.32-754.9.1.el6, i40e driver version 1.5.10-k.
Box 1 has an Intel Xeon CPU E3-1230 v3 @ 3.50GHz with 32 GB of memory.
In the first test Box 1 ran CentOS 7, kernel version
3.10.0-862.11.6.el7, i40e driver version 2.1.14-k. In this test I got a
39.6 Gb/sec transmit rate.
In the second test Box 1 was upgraded to kernel version
3.10.0-957.1.3.el7, i40e driver version 2.3.2-k. After the upgrade the
transmit rate dropped to about 20 Gb/sec.
Apart from the throughput, a significant difference between these two
test runs was the ethernet device interrupt rate: about 20K
interrupts/sec in the first (good) test versus about 11K interrupts/sec
in the second (bad) test. It looked like something had changed in the
adaptive-tx interrupt mechanism. I turned off adaptive-tx, played with
the tx-usecs value, and was able to get the transmit rate back up to
about 35 Gb/sec.
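The tuning was done with ethtool, roughly as follows (<ethX> is the
XL710 interface; the tx-usecs value shown here is just an example, not
necessarily the value that worked best for me):

    # show the current interrupt coalescing settings
    ethtool -c <ethX>

    # disable adaptive tx moderation and pin the tx interrupt delay
    ethtool -C <ethX> adaptive-tx off tx-usecs 50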
I then noticed that the adaptive ITR algorithm changed significantly
between the 2.1.14-k and 2.3.2-k versions of the i40e driver. Here is
the relevant patch: "i40e/i40evf: Add support for new mechanism of
updating adaptive ITR" (commit
a0073a4b8b5906b2a7eab5e9d4a91759b56bc96f). It is a very cool new
algorithm, but I think that in my particular scenario it caused the
slowdown.
What's the best way to get the performance back? Is it possible to
adjust the adaptive ITR parameters without modifying the source code,
so that I could try to find values suitable for my case? Or is
disabling adaptive ITR the only way to go?
Thanks,
-Ed