Message-ID: <660415CE-6748-4749-84D6-7007F69D8EFB@redhat.com>
Date: Fri, 23 Aug 2019 14:10:45 +0200
From: "Eelco Chaudron" <echaudro@...hat.com>
To: "Ilya Maximets" <i.maximets@...sung.com>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
bpf@...r.kernel.org, "David S. Miller" <davem@...emloft.net>,
"Björn Töpel" <bjorn.topel@...el.com>,
"Magnus Karlsson" <magnus.karlsson@...el.com>,
"Jakub Kicinski" <jakub.kicinski@...ronome.com>,
"Alexei Starovoitov" <ast@...nel.org>,
"Daniel Borkmann" <daniel@...earbox.net>,
"Jeff Kirsher" <jeffrey.t.kirsher@...el.com>,
intel-wired-lan@...ts.osuosl.org,
"William Tu" <u9012063@...il.com>,
"Alexander Duyck" <alexander.duyck@...il.com>
Subject: Re: [PATCH net v3] ixgbe: fix double clean of tx descriptors with xdp
On 22 Aug 2019, at 19:12, Ilya Maximets wrote:
> Tx code doesn't clear the descriptors' status after cleaning.
> So, if the budget is larger than the number of used elements in the ring, some
> descriptors will be accounted twice and xsk_umem_complete_tx will move
> prod_tail far beyond the prod_head breaking the completion queue ring.
>
> Fix that by limiting the number of descriptors to clean by the number
> of used descriptors in the tx ring.
>
> 'ixgbe_clean_xdp_tx_irq()' function refactored to look more like
> 'ixgbe_xsk_clean_tx_ring()' since we're allowed to directly use
> 'next_to_clean' and 'next_to_use' indexes.
>
> Fixes: 8221c5eba8c1 ("ixgbe: add AF_XDP zero-copy Tx support")
> Signed-off-by: Ilya Maximets <i.maximets@...sung.com>
> ---
>
> Version 3:
> * Reverted some refactoring made for v2.
> * Eliminated 'budget' for tx clean.
> * prefetch returned.
>
> Version 2:
> * 'ixgbe_clean_xdp_tx_irq()' refactored to look more like
> 'ixgbe_xsk_clean_tx_ring()'.
>
>  drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 29 ++++++++------------
>  1 file changed, 11 insertions(+), 18 deletions(-)
>
>
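
For readers following along, the core of the fix is plain ring arithmetic: derive
how many descriptors are actually outstanding from 'next_to_clean' and
'next_to_use' and clean only that many, instead of walking a fixed budget over
descriptors whose status was never cleared. A minimal standalone sketch of that
arithmetic (illustrative only; 'used_descs' and the example values are made up
for this note and are not taken from the driver):

#include <stdio.h>
#include <stdint.h>

/*
 * Sketch of the ring arithmetic described above (not the actual ixgbe
 * code): on a circular descriptor ring the number of entries waiting to
 * be cleaned follows directly from next_to_clean (ntc) and next_to_use
 * (ntu), so the clean loop never re-accounts descriptors left over from
 * a previous pass.
 */
static uint32_t used_descs(uint32_t ntc, uint32_t ntu, uint32_t ring_size)
{
	/* Handle the wrap-around case where the producer passed the end. */
	return (ntu >= ntc) ? ntu - ntc : ntu + ring_size - ntc;
}

int main(void)
{
	uint32_t ring_size = 512;
	uint32_t ntc = 500, ntu = 10;	/* producer has wrapped around */

	/* Clean exactly this many descriptors, not a fixed budget. */
	printf("descriptors to clean: %u\n", used_descs(ntc, ntu, ring_size));
	return 0;
}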
Did some tests with and without the fix applied. For PVP the results are
a little different depending on the packet size (note this is a single
run, so no deviation numbers).

For the same physical port in and out it’s faster! Note this was OVS
AF_XDP using a XENA tester at 10G wire speed.
+--------------------------------------------------------------------------------+
| Physical to Virtual to Physical test, L3 flows[port redirect]                  |
+-----------------+--------------------------------------------------------------+
|                 | Packet size                                                  |
+-----------------+--------+--------+--------+--------+--------+--------+--------+
| Number of flows | 64     | 128    | 256    | 512    | 768    | 1024   | 1514   |
+-----------------+--------+--------+--------+--------+--------+--------+--------+
| [NO FIX] 1000   | 739161 | 700091 | 690034 | 659894 | 618128 | 594223 | 537504 |
+-----------------+--------+--------+--------+--------+--------+--------+--------+
| [FIX] 1000      | 742317 | 708391 | 689952 | 658034 | 626056 | 587653 | 530885 |
+-----------------+--------+--------+--------+--------+--------+--------+--------+
+--------------------------------------------------------------------------------------+
| Physical loopback test, L3 flows[port redirect]                                       |
+-----------------+--------------------------------------------------------------------+
|                 | Packet size                                                         |
+-----------------+---------+---------+---------+---------+---------+---------+--------+
| Number of flows | 64      | 128     | 256     | 512     | 768     | 1024    | 1514   |
+-----------------+---------+---------+---------+---------+---------+---------+--------+
| [NO FIX] 1000   | 2573298 | 2227578 | 2514318 | 2298204 | 1081861 | 1015173 | 788081 |
+-----------------+---------+---------+---------+---------+---------+---------+--------+
| [FIX] 1000      | 3343188 | 3234993 | 3151833 | 2349597 | 1586276 | 1197304 | 814854 |
+-----------------+---------+---------+---------+---------+---------+---------+--------+

Tested-by: Eelco Chaudron <echaudro@...hat.com>