Message-ID: <20260118004017.1044291-1-kuba@kernel.org>
Date: Sat, 17 Jan 2026 16:40:17 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: rkannoth@...vell.com
Cc: Jakub Kicinski <kuba@...nel.org>,
linux-kernel@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [net-next,v4,01/13] octeontx2-af: npc: cn20k: Index management
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
octeontx2-af: npc: cn20k: Index management
This patch adds MCAM index management for CN20K silicon, implementing
contiguous and non-contiguous allocation schemes with reference, limit,
and priority parameters across the new subbank architecture.
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npc.c b/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npc.c
> new file mode 100644
> index 000000000000..ebccc780ab8e
> --- /dev/null
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npc.c
[ ... ]
> +static int npc_idx_free(struct rvu *rvu, u16 *mcam_idx, int count,
> + bool maps_del)
> +{
> + struct npc_subbank *sb;
> + int idx, i;
> + bool ret;
> + int rc;
> +
> + for (i = 0; i < count; i++) {
> + rc = npc_mcam_idx_2_subbank_idx(rvu, mcam_idx[i],
> + &sb, &idx);
> + if (rc)
> + return rc;
> +
> + ret = npc_subbank_free(rvu, sb, idx);
> + if (ret)
> + return -EINVAL;
If an error occurs partway through this loop, say while freeing entry i of
the count entries, the function returns immediately and the remaining
entries are never freed. Could this leak MCAM slots when npc_idx_free() is
called from error paths in npc_multi_subbank_ref_alloc() or
npc_subbank_noref_alloc()?
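One possible shape for this, purely as a sketch and assuming the remaining
entries can still safely be freed after an earlier failure, would be to keep
walking the array and return the first error at the end:

static int npc_idx_free(struct rvu *rvu, u16 *mcam_idx, int count,
			bool maps_del)
{
	struct npc_subbank *sb;
	int idx, i, rc, err = 0;

	for (i = 0; i < count; i++) {
		/* Try to release every index; remember the first failure. */
		rc = npc_mcam_idx_2_subbank_idx(rvu, mcam_idx[i], &sb, &idx);
		if (rc) {
			if (!err)
				err = rc;
			continue;
		}

		if (npc_subbank_free(rvu, sb, idx)) {
			if (!err)
				err = -EINVAL;
			continue;
		}

		if (!maps_del)
			continue;

		rc = npc_del_from_pf_maps(rvu, mcam_idx[i]);
		if (rc && !err)
			err = rc;
	}

	return err;
}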
> +
> + if (!maps_del)
> + continue;
> +
> + rc = npc_del_from_pf_maps(rvu, mcam_idx[i]);
> + if (rc)
> + return rc;
> + }
> +
> + return 0;
> +}
[ ... ]
> +static int npc_priv_init(struct rvu *rvu)
> +{
[ ... ]
> + npc_priv.sb = kcalloc(num_subbanks, sizeof(struct npc_subbank),
> + GFP_KERNEL);
> + if (!npc_priv.sb)
> + return -ENOMEM;
> +
> + xa_init_flags(&npc_priv.xa_sb_used, XA_FLAGS_ALLOC);
> + xa_init_flags(&npc_priv.xa_sb_free, XA_FLAGS_ALLOC);
> + xa_init_flags(&npc_priv.xa_idx2pf_map, XA_FLAGS_ALLOC);
> + xa_init_flags(&npc_priv.xa_pf_map, XA_FLAGS_ALLOC);
> +
> + if (npc_create_srch_order(num_subbanks)) {
> + kfree(npc_priv.sb);
> + return -ENOMEM;
> + }
The four xarrays initialized above are not destroyed before returning
here. Should xa_destroy() be called for xa_sb_used, xa_sb_free,
xa_idx2pf_map, and xa_pf_map on this error path?
[ ... ]
> + npc_priv.xa_pf2idx_map = kcalloc(npc_priv.pf_cnt,
> + sizeof(struct xarray),
> + GFP_KERNEL);
> + if (!npc_priv.xa_pf2idx_map) {
> + kfree(subbank_srch_order);
> + kfree(npc_priv.sb);
> + return -ENOMEM;
> + }
Same question here - the four xarrays are still not destroyed on this
error path.
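If the answer is yes, one way to cover both spots would be a common unwind
label, roughly along the lines of the sketch below (this assumes a local
'err' variable is added to npc_priv_init() and that nothing has been stored
in the xarrays yet, so xa_destroy() is mainly about keeping init and
teardown symmetric):

	if (npc_create_srch_order(num_subbanks)) {
		err = -ENOMEM;
		goto err_destroy_xa;
	}

	[ ... ]

	npc_priv.xa_pf2idx_map = kcalloc(npc_priv.pf_cnt,
					 sizeof(struct xarray),
					 GFP_KERNEL);
	if (!npc_priv.xa_pf2idx_map) {
		err = -ENOMEM;
		goto err_free_srch_order;
	}

	[ ... ]

	return 0;

err_free_srch_order:
	kfree(subbank_srch_order);
err_destroy_xa:
	/* Empty xarrays make these near no-ops, but the teardown stays symmetric. */
	xa_destroy(&npc_priv.xa_pf_map);
	xa_destroy(&npc_priv.xa_idx2pf_map);
	xa_destroy(&npc_priv.xa_sb_free);
	xa_destroy(&npc_priv.xa_sb_used);
	kfree(npc_priv.sb);
	return err;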
> +
> + for (i = 0; i < npc_priv.pf_cnt; i++)
> + xa_init_flags(&npc_priv.xa_pf2idx_map[i], XA_FLAGS_ALLOC);
> +
> + return 0;
> +}
--
pw-bot: cr