Message-Id: <20210215152717.045112340@linuxfoundation.org>
Date: Mon, 15 Feb 2021 16:27:40 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Eric Dumazet <edumazet@...gle.com>,
Jian Yang <jianyang@...gle.com>,
Maxim Mikityanskiy <maximmi@...lanox.com>,
Saeed Mahameed <saeedm@...dia.com>,
Edward Cree <ecree.xilinx@...il.com>,
Alexander Lobakin <alobakin@...me>,
Jakub Kicinski <kuba@...nel.org>,
John Sperbeck <jsperbeck@...gle.com>
Subject: [PATCH 5.4 52/60] net: gro: do not keep too many GRO packets in napi->rx_list
From: Eric Dumazet <edumazet@...gle.com>
commit 8dc1c444df193701910f5e80b5d4caaf705a8fb0 upstream.
Commit c80794323e82 ("net: Fix packet reordering caused by GRO and
listified RX cooperation") had the unfortunate effect of adding
latencies in common workloads.
Before the patch, GRO packets were immediately passed to
upper stacks.
After the patch, we can accumulate quite a lot of GRO
packets (depending on NAPI budget).
My fix counts the number of segments in napi->rx_count,
instead of the number of logical packets.
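
For illustration only (not part of the patch): a minimal standalone
sketch of the batching logic before and after this change. The batch
size of 8 (the gro_normal_batch sysctl default) and a GRO packet
carrying 45 coalesced segments are assumed values for the example:

	#include <stdio.h>

	static int gro_normal_batch = 8;	/* assumed sysctl default */
	static int rx_count;

	/* Old behaviour: one GRO super-packet counts as one unit. */
	static void gro_normal_one_old(void)
	{
		if (++rx_count >= gro_normal_batch) {
			printf("flush after %d packets\n", rx_count);
			rx_count = 0;
		}
	}

	/* New behaviour: count the segments coalesced into the
	 * packet, so one large GRO packet flushes immediately.
	 */
	static void gro_normal_one_new(int segs)
	{
		rx_count += segs;
		if (rx_count >= gro_normal_batch) {
			printf("flush after %d segments\n", rx_count);
			rx_count = 0;
		}
	}

	int main(void)
	{
		/* New code: a single 45-segment GRO packet flushes
		 * at once instead of sitting on napi->rx_list.
		 */
		gro_normal_one_new(45);

		/* Old code: 8 such packets (~360 segments) would be
		 * queued before the first flush, adding latency.
		 */
		for (int i = 0; i < 8; i++)
			gro_normal_one_old();
		return 0;
	}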
Fixes: c80794323e82 ("net: Fix packet reordering caused by GRO and listified RX cooperation")
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Bisected-by: John Sperbeck <jsperbeck@...gle.com>
Tested-by: Jian Yang <jianyang@...gle.com>
Cc: Maxim Mikityanskiy <maximmi@...lanox.com>
Reviewed-by: Saeed Mahameed <saeedm@...dia.com>
Reviewed-by: Edward Cree <ecree.xilinx@...il.com>
Reviewed-by: Alexander Lobakin <alobakin@...me>
Link: https://lore.kernel.org/r/20210204213146.4192368-1-eric.dumazet@gmail.com
Signed-off-by: Jakub Kicinski <kuba@...nel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
net/core/dev.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5275,10 +5275,11 @@ static void gro_normal_list(struct napi_
/* Queue one GRO_NORMAL SKB up for list processing. If batch size exceeded,
* pass the whole batch up to the stack.
*/
-static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb)
+static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb, int segs)
{
list_add_tail(&skb->list, &napi->rx_list);
- if (++napi->rx_count >= gro_normal_batch)
+ napi->rx_count += segs;
+ if (napi->rx_count >= gro_normal_batch)
gro_normal_list(napi);
}
@@ -5317,7 +5318,7 @@ static int napi_gro_complete(struct napi
}
out:
- gro_normal_one(napi, skb);
+ gro_normal_one(napi, skb, NAPI_GRO_CB(skb)->count);
return NET_RX_SUCCESS;
}
@@ -5608,7 +5609,7 @@ static gro_result_t napi_skb_finish(stru
{
switch (ret) {
case GRO_NORMAL:
- gro_normal_one(napi, skb);
+ gro_normal_one(napi, skb, 1);
break;
case GRO_DROP:
@@ -5696,7 +5697,7 @@ static gro_result_t napi_frags_finish(st
__skb_push(skb, ETH_HLEN);
skb->protocol = eth_type_trans(skb, skb->dev);
if (ret == GRO_NORMAL)
- gro_normal_one(napi, skb);
+ gro_normal_one(napi, skb, 1);
break;
case GRO_DROP:
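
For context, the NAPI_GRO_CB(skb)->count passed above is the number of
segments aggregated into the GRO packet; a trimmed excerpt from
include/linux/netdevice.h of this era (surrounding fields omitted):

	struct napi_gro_cb {
		/* ... */

		/* Number of segments aggregated. */
		u16	count;

		/* ... */
	};

	#define NAPI_GRO_CB(skb) ((struct napi_gro_cb *)(skb)->cb)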