Message-Id: <20240214151204.2976-1-paul.barker.ct@bp.renesas.com>
Date: Wed, 14 Feb 2024 15:12:04 +0000
From: Paul Barker <paul.barker.ct@...renesas.com>
To: Sergey Shtylyov <s.shtylyov@....ru>,
"David S . Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>
Cc: Paul Barker <paul.barker.ct@...renesas.com>,
Yoshihiro Shimoda <yoshihiro.shimoda.uh@...esas.com>,
Wolfram Sang <wsa+renesas@...g-engineering.com>,
Nikita Yushchenko <nikita.yoush@...entembedded.com>,
Uwe Kleine-König <u.kleine-koenig@...gutronix.de>,
Claudiu Beznea <claudiu.beznea.uj@...renesas.com>,
Lad Prabhakar <prabhakar.mahadev-lad.rj@...renesas.com>,
Biju Das <biju.das.jz@...renesas.com>,
netdev@...r.kernel.org,
linux-renesas-soc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH net v3] net: ravb: Count packets instead of descriptors in GbEth RX path

The units of "work done" in the RX path should be packets instead of
descriptors, as large packets can be spread over multiple descriptors.

Fixes: 1c59eb678cbd ("ravb: Fillup ravb_rx_gbeth() stub")
Signed-off-by: Paul Barker <paul.barker.ct@...renesas.com>
---
This patch has been broken out from my previous series "Improve GbEth
performance on Renesas RZ/G2L and related SoCs" and submitted as a
bugfix as requested by Sergey. I've labeled it as 'v3' so the ordering
is clear. Remaining patches from the series will follow once we've done
gPTP testing.
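
For illustration, a small standalone sketch (not driver code; the
descriptor layout is simplified and all names here are made up) of why
a per-descriptor budget under-delivers when packets span multiple
descriptors, while a per-packet budget does not:

/* Hypothetical user-space model of the two accounting schemes. */
#include <stdio.h>

#define NUM_DESCS	32
#define QUOTA		8	/* NAPI budget, in packets */

int main(void)
{
	/* 1 marks a descriptor that completes a packet (as with
	 * DT_FSINGLE/DT_FEND), 0 marks a continuation descriptor.
	 */
	int completes_packet[NUM_DESCS];
	int i, budget, pkts_old = 0, pkts_new = 0;

	for (i = 0; i < NUM_DESCS; i++)
		completes_packet[i] = ((i % 3) == 2);	/* 3 descs per packet */

	/* Old scheme: one unit of budget per descriptor walked. */
	budget = QUOTA;
	for (i = 0; i < NUM_DESCS && budget-- > 0; i++)
		pkts_old += completes_packet[i];

	/* New scheme: one unit of budget per completed packet. */
	for (i = 0; i < NUM_DESCS && pkts_new < QUOTA; i++)
		pkts_new += completes_packet[i];

	printf("budget %d: per-descriptor counting delivers %d packets, per-packet counting delivers %d\n",
	       QUOTA, pkts_old, pkts_new);
	return 0;
}

With each packet spread over three descriptors, the old accounting
burns the whole budget after only a couple of complete packets, which
is the behaviour this patch corrects.
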
drivers/net/ethernet/renesas/ravb_main.c | 22 +++++++++-------------
1 file changed, 9 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
index 0e3731f50fc2..f7566cfa45ca 100644
--- a/drivers/net/ethernet/renesas/ravb_main.c
+++ b/drivers/net/ethernet/renesas/ravb_main.c
@@ -772,29 +772,25 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q)
struct ravb_rx_desc *desc;
struct sk_buff *skb;
dma_addr_t dma_addr;
+ int rx_packets = 0;
u8 desc_status;
- int boguscnt;
u16 pkt_len;
u8 die_dt;
int entry;
int limit;
+ int i;
entry = priv->cur_rx[q] % priv->num_rx_ring[q];
- boguscnt = priv->dirty_rx[q] + priv->num_rx_ring[q] - priv->cur_rx[q];
+ limit = priv->dirty_rx[q] + priv->num_rx_ring[q] - priv->cur_rx[q];
stats = &priv->stats[q];
- boguscnt = min(boguscnt, *quota);
- limit = boguscnt;
desc = &priv->gbeth_rx_ring[entry];
- while (desc->die_dt != DT_FEMPTY) {
+ for (i = 0; i < limit && rx_packets < *quota && desc->die_dt != DT_FEMPTY; i++) {
/* Descriptor type must be checked before all other reads */
dma_rmb();
desc_status = desc->msc;
pkt_len = le16_to_cpu(desc->ds_cc) & RX_DS;
- if (--boguscnt < 0)
- break;
-
/* We use 0-byte descriptors to mark the DMA mapping errors */
if (!pkt_len)
continue;
@@ -820,7 +816,7 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q)
skb_put(skb, pkt_len);
skb->protocol = eth_type_trans(skb, ndev);
napi_gro_receive(&priv->napi[q], skb);
- stats->rx_packets++;
+ rx_packets++;
stats->rx_bytes += pkt_len;
break;
case DT_FSTART:
@@ -848,7 +844,7 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q)
eth_type_trans(priv->rx_1st_skb, ndev);
napi_gro_receive(&priv->napi[q],
priv->rx_1st_skb);
- stats->rx_packets++;
+ rx_packets++;
stats->rx_bytes += pkt_len;
break;
}
@@ -887,9 +883,9 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q)
desc->die_dt = DT_FEMPTY;
}
- *quota -= limit - (++boguscnt);
-
- return boguscnt <= 0;
+ stats->rx_packets += rx_packets;
+ *quota -= rx_packets;
+ return *quota == 0;
}
/* Packet receive function for Ethernet AVB */
--
2.43.1