Message-ID: <20250507134620.GE3339421@horms.kernel.org>
Date: Wed, 7 May 2025 14:46:20 +0100
From: Simon Horman <horms@...nel.org>
To: Tanmay Jagdale <tanmay@...vell.com>
Cc: bbrezillon@...nel.org, arno@...isbad.org, schalla@...vell.com,
herbert@...dor.apana.org.au, davem@...emloft.net,
sgoutham@...vell.com, lcherian@...vell.com, gakula@...vell.com,
jerinj@...vell.com, hkelam@...vell.com, sbhatta@...vell.com,
andrew+netdev@...n.ch, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, bbhushan2@...vell.com, bhelgaas@...gle.com,
pstanner@...hat.com, gregkh@...uxfoundation.org,
peterz@...radead.org, linux@...blig.org,
krzysztof.kozlowski@...aro.org, giovanni.cabiddu@...el.com,
linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org, rkannoth@...vell.com, sumang@...vell.com,
gcherian@...vell.com
Subject: Re: [net-next PATCH v1 10/15] octeontx2-pf: ipsec: Setup NIX HW
resources for inbound flows
On Fri, May 02, 2025 at 06:49:51PM +0530, Tanmay Jagdale wrote:
> A incoming encrypted IPsec packet in the RVU NIX hardware needs
> to be classified for inline fastpath processing and then assinged
nit: assigned
checkpatch.pl --codespell is your friend
> a RQ and Aura pool before sending to CPT for decryption.
>
> Create a dedicated RQ, Aura and Pool with the following setup
> specifically for IPsec flows:
> - Set ipsech_en, ipsecd_drop_en in RQ context to enable hardware
> fastpath processing for IPsec flows.
> - Configure the dedicated Aura to raise an interrupt when
> it's buffer count drops below a threshold value so that the
> buffers can be replenished from the CPU.
>
> The RQ, Aura and Pool contexts are initialized only when esp-hw-offload
> feature is enabled via ethtool.
>
> Also, move some of the RQ context macro definitions to otx2_common.h
> so that they can be used in the IPsec driver as well.
>
> Signed-off-by: Tanmay Jagdale <tanmay@...vell.com>
...
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
...
> +static int cn10k_ipsec_setup_nix_rx_hw_resources(struct otx2_nic *pfvf)
> +{
> + struct otx2_hw *hw = &pfvf->hw;
> + int stack_pages, pool_id;
> + struct otx2_pool *pool;
> + int err, ptr, num_ptrs;
> + dma_addr_t bufptr;
> +
> + num_ptrs = 256;
> + pool_id = pfvf->ipsec.inb_ipsec_pool;
> + stack_pages = (num_ptrs + hw->stack_pg_ptrs - 1) / hw->stack_pg_ptrs;
> +
> + mutex_lock(&pfvf->mbox.lock);
> +
> + /* Initialize aura context */
> + err = cn10k_ipsec_ingress_aura_init(pfvf, pool_id, pool_id, num_ptrs);
> + if (err)
> + goto fail;
> +
> + /* Initialize pool */
> + err = otx2_pool_init(pfvf, pool_id, stack_pages, num_ptrs, pfvf->rbsize, AURA_NIX_RQ);
> + if (err)
This appears to leak pool->fc_addr, which was allocated by
cn10k_ipsec_ingress_aura_init() above: the fail path below resets the
mailbox but does not free it.
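Perhaps something like this (completely untested, and the label name is
only a suggestion):

```c
	err = otx2_pool_init(pfvf, pool_id, stack_pages, num_ptrs,
			     pfvf->rbsize, AURA_NIX_RQ);
	if (err)
		goto err_free_fc;
	...
err_free_fc:
	/* Release the per-aura fill-count buffer allocated by
	 * cn10k_ipsec_ingress_aura_init() before resetting the mailbox.
	 */
	qmem_free(pfvf->dev, pool->fc_addr);
	pool->fc_addr = NULL;
fail:
	otx2_mbox_reset(&pfvf->mbox.mbox, 0);
	return err;
```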
> + goto fail;
> +
> + /* Flush accumulated messages */
> + err = otx2_sync_mbox_msg(&pfvf->mbox);
> + if (err)
> + goto pool_fail;
> +
> + /* Allocate pointers and free them to aura/pool */
> + pool = &pfvf->qset.pool[pool_id];
> + for (ptr = 0; ptr < num_ptrs; ptr++) {
> + err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, ptr);
> + if (err) {
> + err = -ENOMEM;
> + goto pool_fail;
> + }
> + pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr + OTX2_HEAD_ROOM);
> + }
> +
> + /* Initialize RQ and map buffers from pool_id */
> + err = cn10k_ipsec_ingress_rq_init(pfvf, pfvf->ipsec.inb_ipsec_rq, pool_id);
> + if (err)
> + goto pool_fail;
> +
> + mutex_unlock(&pfvf->mbox.lock);
> + return 0;
> +
> +pool_fail:
> + mutex_unlock(&pfvf->mbox.lock);
> + qmem_free(pfvf->dev, pool->stack);
> + qmem_free(pfvf->dev, pool->fc_addr);
> + page_pool_destroy(pool->page_pool);
> + devm_kfree(pfvf->dev, pool->xdp);
It is not clear to me why devm_kfree() is being called here.
I didn't look deeply, but I think it is likely that either pool->xdp
should be freed when the device is released, or pool->xdp should not be
allocated (and freed) using devm functions.
> + pool->xsk_pool = NULL;
The clean-up of pool->stack, pool->page_pool, pool->xdp, and
pool->xsk_pool all seems to unwind initialisation performed by
otx2_pool_init(), and appears to be duplicated elsewhere.
I would suggest adding a helper for that.
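Completely untested, and the name and placement are only a suggestion,
but something along these lines (modulo the devm_kfree() question above):

```c
/* Unwind the initialisation performed by otx2_pool_init() */
static void otx2_pool_uninit(struct otx2_nic *pfvf, struct otx2_pool *pool)
{
	qmem_free(pfvf->dev, pool->stack);
	pool->stack = NULL;
	page_pool_destroy(pool->page_pool);
	pool->page_pool = NULL;
	devm_kfree(pfvf->dev, pool->xdp);
	pool->xdp = NULL;
	pool->xsk_pool = NULL;
}
```

Then both this error path and the existing duplicate call sites could use
it, keeping the unwind logic in one place.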
> +fail:
> + otx2_mbox_reset(&pfvf->mbox.mbox, 0);
> + return err;
> +}
...