Message-ID: <20260118004024.1044368-1-kuba@kernel.org>
Date: Sat, 17 Jan 2026 16:40:24 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: rkannoth@...vell.com
Cc: Jakub Kicinski <kuba@...nel.org>,
linux-kernel@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [net-next,v4,05/13] octeontx2-af: npc: cn20k: Allocate default MCAM indexes
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
octeontx2-af: npc: cn20k: Allocate default MCAM indexes
This patch allocates MCAM indexes for promiscuous, multicast, broadcast,
and unicast traffic when a NIX LF is attached on CN20K silicon, replacing
the previous static reservation approach.
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
> index 2d78e08f985f..df02caedc020 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
[ ... ]
> @@ -1738,8 +1745,14 @@ int rvu_mbox_handler_attach_resources(struct rvu *rvu,
> if (attach->npalf)
> rvu_attach_block(rvu, pcifunc, BLKTYPE_NPA, 1, attach);
>
> - if (attach->nixlf)
> + if (attach->nixlf) {
> rvu_attach_block(rvu, pcifunc, BLKTYPE_NIX, 1, attach);
> + if (is_cn20k(rvu->pdev)) {
> + err = npc_cn20k_dft_rules_alloc(rvu, pcifunc);
> + if (err)
> + goto exit;
^^^^
When npc_cn20k_dft_rules_alloc() fails after rvu_attach_block() has
already attached the NIX LF, the error path jumps to exit without
detaching NIX. The caller (otx2_init_rsrc in otx2_pf.c) assumes that on
error nothing was attached, so it does not call otx2_detach_resources().
Could this leave NIX LF attached without proper cleanup on allocation
failure?
> + }
> + }
>
> if (attach->sso) {
[ ... ]
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npc.c b/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npc.c
> index c3f41f4ea99c..33be0bf0b481 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npc.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npc.c
[ ... ]
> @@ -2473,6 +2836,7 @@ static int npc_priv_init(struct rvu *rvu)
> xa_init_flags(&npc_priv.xa_sb_used, XA_FLAGS_ALLOC);
> xa_init_flags(&npc_priv.xa_sb_free, XA_FLAGS_ALLOC);
> xa_init_flags(&npc_priv.xa_idx2pf_map, XA_FLAGS_ALLOC);
> xa_init_flags(&npc_priv.xa_pf_map, XA_FLAGS_ALLOC);
> + xa_init_flags(&npc_priv.xa_pf2dfl_rmap, XA_FLAGS_ALLOC);
>
> if (npc_create_srch_order(num_subbanks)) {
> kfree(npc_priv.sb);
> return -ENOMEM;
^^^^
At this point npc_subbank_init() has been called for all subbanks (in
the loop at lines 2850-2851), which calls xa_store() to populate
xa_sb_free. This error path frees npc_priv.sb but does not call
xa_destroy() on the xarrays that now contain entries.
Similarly, the error path taken when the kcalloc() of xa_pf2idx_map
fails (around lines 2857-2860) also skips xa_destroy() on the populated
xarrays.
Could these error paths leak the xarray entries stored by
npc_subbank_init()?
> }
[ ... ]