Message-ID: <1769411695-18820-3-git-send-email-tariqt@nvidia.com>
Date: Mon, 26 Jan 2026 09:14:54 +0200
From: Tariq Toukan <tariqt@...dia.com>
To: Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Andrew Lunn <andrew+netdev@...n.ch>, "David
S. Miller" <davem@...emloft.net>
CC: Saeed Mahameed <saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>,
Tariq Toukan <tariqt@...dia.com>, Mark Bloch <mbloch@...dia.com>,
<netdev@...r.kernel.org>, <linux-rdma@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, Gal Pressman <gal@...dia.com>, Moshe Shemesh
<moshe@...dia.com>
Subject: [PATCH net 2/3] net/mlx5e: TC, delete flows only for existing peers

From: Mark Bloch <mbloch@...dia.com>

When deleting TC steering flows, iterate only over actual devcom
peers instead of assuming all possible ports exist. This avoids
touching non-existent peers and ensures cleanup is limited to
devices the driver is currently connected to.

This fixes the following NULL pointer dereference:

BUG: kernel NULL pointer dereference, address: 0000000000000008
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 133c8a067 P4D 0
Oops: Oops: 0002 [#1] SMP
CPU: 19 UID: 0 PID: 2169 Comm: tc Not tainted 6.18.0+ #156 NONE
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
RIP: 0010:mlx5e_tc_del_fdb_peers_flow+0xbe/0x200 [mlx5_core]
Code: 00 00 a8 08 74 a8 49 8b 46 18 f6 c4 02 74 9f 4c 8d bf a0 12 00 00 4c 89 ff e8 0e e7 96 e1 49 8b 44 24 08 49 8b 0c 24 4c 89 ff <48> 89 41 08 48 89 08 49 89 2c 24 49 89 5c 24 08 e8 7d ce 96 e1 49
RSP: 0018:ff11000143867528 EFLAGS: 00010246
RAX: 0000000000000000 RBX: dead000000000122 RCX: 0000000000000000
RDX: ff11000143691580 RSI: ff110001026e5000 RDI: ff11000106f3d2a0
RBP: dead000000000100 R08: 00000000000003fd R09: 0000000000000002
R10: ff11000101c75690 R11: ff1100085faea178 R12: ff11000115f0ae78
R13: 0000000000000000 R14: ff11000115f0a800 R15: ff11000106f3d2a0
FS: 00007f35236bf740(0000) GS:ff110008dc809000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000008 CR3: 0000000157a01001 CR4: 0000000000373eb0
Call Trace:
<TASK>
mlx5e_tc_del_flow+0x46/0x270 [mlx5_core]
mlx5e_flow_put+0x25/0x50 [mlx5_core]
mlx5e_delete_flower+0x2a6/0x3e0 [mlx5_core]
tc_setup_cb_reoffload+0x20/0x80
fl_reoffload+0x26f/0x2f0 [cls_flower]
? mlx5e_tc_reoffload_flows_work+0xc0/0xc0 [mlx5_core]
? mlx5e_tc_reoffload_flows_work+0xc0/0xc0 [mlx5_core]
tcf_block_playback_offloads+0x9e/0x1c0
tcf_block_unbind+0x7b/0xd0
tcf_block_setup+0x186/0x1d0
tcf_block_offload_cmd.isra.0+0xef/0x130
tcf_block_offload_unbind+0x43/0x70
__tcf_block_put+0x85/0x160
ingress_destroy+0x32/0x110 [sch_ingress]
__qdisc_destroy+0x44/0x100
qdisc_graft+0x22b/0x610
tc_get_qdisc+0x183/0x4d0
rtnetlink_rcv_msg+0x2d7/0x3d0
? rtnl_calcit.isra.0+0x100/0x100
netlink_rcv_skb+0x53/0x100
netlink_unicast+0x249/0x320
? __alloc_skb+0x102/0x1f0
netlink_sendmsg+0x1e3/0x420
__sock_sendmsg+0x38/0x60
____sys_sendmsg+0x1ef/0x230
? copy_msghdr_from_user+0x6c/0xa0
___sys_sendmsg+0x7f/0xc0
? ___sys_recvmsg+0x8a/0xc0
? __sys_sendto+0x119/0x180
__sys_sendmsg+0x61/0xb0
do_syscall_64+0x55/0x640
entry_SYSCALL_64_after_hwframe+0x4b/0x53
RIP: 0033:0x7f35238bb764
Code: 15 b9 86 0c 00 f7 d8 64 89 02 b8 ff ff ff ff eb bf 0f 1f 44 00 00 f3 0f 1e fa 80 3d e5 08 0d 00 00 74 13 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 4c c3 0f 1f 00 55 48 89 e5 48 83 ec 20 89 55
RSP: 002b:00007ffed4c35638 EFLAGS: 00000202 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 000055a2efcc75e0 RCX: 00007f35238bb764
RDX: 0000000000000000 RSI: 00007ffed4c356a0 RDI: 0000000000000003
RBP: 00007ffed4c35710 R08: 0000000000000010 R09: 00007f3523984b20
R10: 0000000000000004 R11: 0000000000000202 R12: 00007ffed4c35790
R13: 000000006947df8f R14: 000055a2efcc75e0 R15: 00007ffed4c35780

Fixes: 9be6c21fdcf8 ("net/mlx5e: Handle offloads flows per peer")
Signed-off-by: Mark Bloch <mbloch@...dia.com>
Reviewed-by: Shay Drori <shayd@...dia.com>
Signed-off-by: Tariq Toukan <tariqt@...dia.com>
---
.../net/ethernet/mellanox/mlx5/core/en_tc.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
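
Illustrative note (not part of the patch): the change replaces the
fixed-size loop over MLX5_MAX_PORTS with the devcom peer iterator, so
only eswitches that are actually registered as peers get visited. A
minimal sketch of the before/after pattern, with the surrounding driver
context and locking omitted:

	/* Before: every possible port index is assumed to have a peer. */
	for (i = 0; i < MLX5_MAX_PORTS; i++) {
		if (i == mlx5_get_dev_index(esw->dev))
			continue;
		/* index i may have no devcom peer behind it */
		...
	}

	/* After: walk only the peers devcom currently tracks. */
	mlx5_devcom_for_each_peer_entry(esw->devcom, peer_esw, pos) {
		i = mlx5_get_dev_index(peer_esw->dev);
		...
	}
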
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index a8773b2342c2..424786f489ec 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -2147,11 +2147,14 @@ static void mlx5e_tc_del_fdb_peer_flow(struct mlx5e_tc_flow *flow,
 
 static void mlx5e_tc_del_fdb_peers_flow(struct mlx5e_tc_flow *flow)
 {
+	struct mlx5_devcom_comp_dev *devcom;
+	struct mlx5_devcom_comp_dev *pos;
+	struct mlx5_eswitch *peer_esw;
 	int i;
 
-	for (i = 0; i < MLX5_MAX_PORTS; i++) {
-		if (i == mlx5_get_dev_index(flow->priv->mdev))
-			continue;
+	devcom = flow->priv->mdev->priv.eswitch->devcom;
+	mlx5_devcom_for_each_peer_entry(devcom, peer_esw, pos) {
+		i = mlx5_get_dev_index(peer_esw->dev);
 		mlx5e_tc_del_fdb_peer_flow(flow, i);
 	}
 }
@@ -5513,12 +5516,16 @@ int mlx5e_tc_num_filters(struct mlx5e_priv *priv, unsigned long flags)
 
 void mlx5e_tc_clean_fdb_peer_flows(struct mlx5_eswitch *esw)
 {
+	struct mlx5_devcom_comp_dev *devcom;
+	struct mlx5_devcom_comp_dev *pos;
 	struct mlx5e_tc_flow *flow, *tmp;
+	struct mlx5_eswitch *peer_esw;
 	int i;
 
-	for (i = 0; i < MLX5_MAX_PORTS; i++) {
-		if (i == mlx5_get_dev_index(esw->dev))
-			continue;
+	devcom = esw->devcom;
+
+	mlx5_devcom_for_each_peer_entry(devcom, peer_esw, pos) {
+		i = mlx5_get_dev_index(peer_esw->dev);
 		list_for_each_entry_safe(flow, tmp, &esw->offloads.peer_flows[i], peer[i])
 			mlx5e_tc_del_fdb_peers_flow(flow);
 	}
--
2.40.1