Message-ID: <20250817202323.308604-5-mbloch@nvidia.com>
Date: Sun, 17 Aug 2025 23:23:20 +0300
From: Mark Bloch <mbloch@...dia.com>
To: Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Andrew Lunn <andrew+netdev@...n.ch>, "David
S. Miller" <davem@...emloft.net>
CC: Tariq Toukan <tariqt@...dia.com>, Leon Romanovsky <leon@...nel.org>,
"Saeed Mahameed" <saeedm@...dia.com>, <netdev@...r.kernel.org>,
<linux-rdma@...r.kernel.org>, <linux-kernel@...r.kernel.org>, Gal Pressman
<gal@...dia.com>, Yevgeny Kliteynik <kliteyn@...dia.com>, Vlad Dogaru
<vdogaru@...dia.com>, Mark Bloch <mbloch@...dia.com>
Subject: [PATCH net 4/7] net/mlx5: HWS, prevent rehash from filling up the queues
From: Yevgeny Kliteynik <kliteyn@...dia.com>
While moving the rules during rehash, the completion queue (CQ) is not
drained: the flush and drain happen only once all the rules of a given
queue have been moved. This behaviour can accumulate a large number of
rules that have not yet received their completions, which eventually
fills up the queue and causes the rehash to fail.

Fix this by forcing a drain once the number of outstanding completions
reaches the queue's burst threshold.
Fixes: ef94799a8741 ("net/mlx5: HWS, rework rehash loop")
Signed-off-by: Yevgeny Kliteynik <kliteyn@...dia.com>
Reviewed-by: Vlad Dogaru <vdogaru@...dia.com>
Signed-off-by: Mark Bloch <mbloch@...dia.com>
---
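Note for reviewers: the fix simply computes a drain flag before each
poll, so the CQ is flushed whenever the count of rules still awaiting
completion reaches the burst threshold, instead of always passing a
hard-coded false. Below is a minimal user-space sketch of that pattern,
not the driver code: queue_poll() and BURST_TH are hypothetical
stand-ins for mlx5hws_bwc_queue_poll() and hws_bwc_get_burst_th().

#include <stdbool.h>
#include <stdio.h>

#define BURST_TH 4	/* stand-in for hws_bwc_get_burst_th() */

/* Hypothetical poll, standing in for mlx5hws_bwc_queue_poll():
 * with drain == false it only reaps completions that have already
 * arrived (possibly none), so *pending can keep growing; with
 * drain == true it waits until every outstanding completion has
 * been consumed.  Modeled here by clearing the counter.
 */
static void queue_poll(unsigned int *pending, bool drain)
{
	if (drain)
		*pending = 0;
}

int main(void)
{
	unsigned int pending = 0;
	bool drain;
	int i;

	for (i = 0; i < 20; i++) {
		pending++;	/* one more rule moved, completion pending */

		/* the fix: force a drain once the outstanding count
		 * reaches the burst threshold, so pending completions
		 * can never accumulate past it and fill up the queue
		 */
		drain = pending >= BURST_TH;
		queue_poll(&pending, drain);
	}

	queue_poll(&pending, true);	/* final flush, as after the loop */
	printf("pending after flush: %u\n", pending);
	return 0;
}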
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c
index 0219a49b2326..2a59be11fe55 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c
@@ -84,6 +84,7 @@ hws_bwc_matcher_move_all_simple(struct mlx5hws_bwc_matcher *bwc_matcher)
 	struct list_head *rules_list;
 	u32 pending_rules;
 	int i, ret = 0;
+	bool drain;
 
 	mlx5hws_bwc_rule_fill_attr(bwc_matcher, 0, 0, &rule_attr);
 
@@ -111,10 +112,12 @@ hws_bwc_matcher_move_all_simple(struct mlx5hws_bwc_matcher *bwc_matcher)
 			}
 
 			pending_rules++;
+			drain = pending_rules >=
+				hws_bwc_get_burst_th(ctx, rule_attr.queue_id);
 			ret = mlx5hws_bwc_queue_poll(ctx,
 						     rule_attr.queue_id,
 						     &pending_rules,
-						     false);
+						     drain);
 			if (unlikely(ret)) {
 				if (ret == -ETIMEDOUT) {
 					mlx5hws_err(ctx,
--
2.34.1