Message-Id: <1583143142-7958-2-git-send-email-sunil.kovvuri@gmail.com>
Date: Mon, 2 Mar 2020 15:29:00 +0530
From: sunil.kovvuri@...il.com
To: netdev@...r.kernel.org
Cc: davem@...emloft.net, Sunil Goutham <sgoutham@...vell.com>
Subject: [PATCH 1/3] net: thunderx: Adjust CQE_RX drop levels for better performance
From: Sunil Goutham <sgoutham@...vell.com>
With the current RX RED/DROP levels of 192/184 for CQE_RX, the LLC
gets polluted when the incoming packet rate is high, resulting in
more cache misses and higher latency in packet processing. This
slows down the whole receive path and costs performance. Hence the
levels are adjusted to 224/216 (i.e. for a CQ size of 1024, Rx pkts
will be RED dropped or dropped once more than 128 or 160 CQEs
respectively are in use).
Signed-off-by: Sunil Goutham <sgoutham@...vell.com>
---
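Note (editorial, not part of this patch): the sketch below maps the new
levels to CQE counts for a 1024-entry CQ. It assumes the level
granularity is CQ size / 256, as the 4K example in the nicvf_queues.h
comment suggests; the program and its names are illustrative only.

/* Illustrative only, not part of the driver: maps PASS/DROP levels to
 * CQE thresholds, assuming a level granularity of (CQ size / 256) as
 * the 4K example in nicvf_queues.h suggests.
 */
#include <stdio.h>

#define RQ_PASS_CQ_LVL	224UL	/* new RED (pass) level from this patch */
#define RQ_DROP_CQ_LVL	216UL	/* new drop level from this patch */

int main(void)
{
	unsigned long cq_size = 1024;		/* CQ size from the example above */
	unsigned long unit = cq_size / 256;	/* assumed level granularity */
	unsigned long pass = RQ_PASS_CQ_LVL * unit;	/* unused CQEs needed to avoid RED */
	unsigned long drop = RQ_DROP_CQ_LVL * unit;	/* below this many unused CQEs, pkts drop */

	printf("RED drop when unused CQEs < %lu (i.e. > %lu in use)\n",
	       pass, cq_size - pass);
	printf("hard drop when unused CQEs < %lu (i.e. > %lu in use)\n",
	       drop, cq_size - drop);
	return 0;
}

Under that assumption, the 128/160 figures in the changelog are the
numbers of in-use CQEs at which RED and hard drop kick in.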
drivers/net/ethernet/cavium/thunder/nicvf_queues.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.h b/drivers/net/ethernet/cavium/thunder/nicvf_queues.h
index bc2427c..2460451 100644
--- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.h
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.h
@@ -100,8 +100,8 @@
* RED accepts pkt if unused CQE < 2304 & >= 2560
* DROPs pkts if unused CQE < 2304
*/
-#define RQ_PASS_CQ_LVL 192ULL
-#define RQ_DROP_CQ_LVL 184ULL
+#define RQ_PASS_CQ_LVL 224ULL
+#define RQ_DROP_CQ_LVL 216ULL
/* RED and Backpressure levels of RBDR for pkt reception
* For RBDR, level is a measure of fullness i.e 0x0 means empty
--
2.7.4