Message-ID: <aC705Y7wYuz0VBE8@optiplex>
Date: Thu, 22 May 2025 15:26:53 +0530
From: Tanmay Jagdale <tanmay@...vell.com>
To: Simon Horman <horms@...nel.org>
CC: <bbrezillon@...nel.org>, <herbert@...dor.apana.org.au>,
        <davem@...emloft.net>, <sgoutham@...vell.com>, <lcherian@...vell.com>,
        <gakula@...vell.com>, <jerinj@...vell.com>, <hkelam@...vell.com>,
        <sbhatta@...vell.com>, <andrew+netdev@...n.ch>, <edumazet@...gle.com>,
        <kuba@...nel.org>, <pabeni@...hat.com>, <bbhushan2@...vell.com>,
        <bhelgaas@...gle.com>, <pstanner@...hat.com>,
        <gregkh@...uxfoundation.org>, <peterz@...radead.org>,
        <linux@...blig.org>, <krzysztof.kozlowski@...aro.org>,
        <giovanni.cabiddu@...el.com>, <linux-crypto@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <netdev@...r.kernel.org>,
        <gcherian@...vell.com>
Subject: Re: [net-next PATCH v1 10/15] octeontx2-pf: ipsec: Setup NIX HW
 resources for inbound flows

Hi Simon,

On 2025-05-07 at 19:16:20, Simon Horman (horms@...nel.org) wrote:
> On Fri, May 02, 2025 at 06:49:51PM +0530, Tanmay Jagdale wrote:
> > A incoming encrypted IPsec packet in the RVU NIX hardware needs
> > to be classified for inline fastpath processing and then assinged
> 
> nit: assigned
> 
>      checkpatch.pl --codespell is your friend
> 
ACK.

> > a RQ and Aura pool before sending to CPT for decryption.
> > 
> > Create a dedicated RQ, Aura and Pool with the following setup
> > specifically for IPsec flows:
> >  - Set ipsech_en, ipsecd_drop_en in RQ context to enable hardware
> >    fastpath processing for IPsec flows.
> >  - Configure the dedicated Aura to raise an interrupt when
> >    it's buffer count drops below a threshold value so that the
> >    buffers can be replenished from the CPU.
> > 
> > The RQ, Aura and Pool contexts are initialized only when esp-hw-offload
> > feature is enabled via ethtool.
> > 
> > Also, move some of the RQ context macro definitions to otx2_common.h
> > so that they can be used in the IPsec driver as well.
> > 
> > Signed-off-by: Tanmay Jagdale <tanmay@...vell.com>
> 
> ...
> 
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
> 
> ...
> 
> > +static int cn10k_ipsec_setup_nix_rx_hw_resources(struct otx2_nic *pfvf)
> > +{
> > +	struct otx2_hw *hw = &pfvf->hw;
> > +	int stack_pages, pool_id;
> > +	struct otx2_pool *pool;
> > +	int err, ptr, num_ptrs;
> > +	dma_addr_t bufptr;
> > +
> > +	num_ptrs = 256;
> > +	pool_id = pfvf->ipsec.inb_ipsec_pool;
> > +	stack_pages = (num_ptrs + hw->stack_pg_ptrs - 1) / hw->stack_pg_ptrs;
> > +
> > +	mutex_lock(&pfvf->mbox.lock);
> > +
> > +	/* Initialize aura context */
> > +	err = cn10k_ipsec_ingress_aura_init(pfvf, pool_id, pool_id, num_ptrs);
> > +	if (err)
> > +		goto fail;
> > +
> > +	/* Initialize pool */
> > +	err = otx2_pool_init(pfvf, pool_id, stack_pages, num_ptrs, pfvf->rbsize, AURA_NIX_RQ);
> > +	if (err)
> 
> This appears to leak pool->fc_addr.
Okay, let me look into this.
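Looking at it more, fc_addr is allocated by the aura init, so when
otx2_pool_init() fails and jumps straight to "fail" it is never freed.
Something along these lines might cover it (rough, untested sketch; the
label name is made up). The fc_addr free moves below a new label so the
existing pool_fail path still reaches it by falling through, and the
unlock moves down so the direct gotos release the mbox lock as well:

	err = otx2_pool_init(pfvf, pool_id, stack_pages, num_ptrs,
			     pfvf->rbsize, AURA_NIX_RQ);
	if (err)
		goto aura_fail;

	/* ... rest of the function unchanged ... */

pool_fail:
	qmem_free(pfvf->dev, pool->stack);
	page_pool_destroy(pool->page_pool);
	devm_kfree(pfvf->dev, pool->xdp);
	pool->xsk_pool = NULL;
aura_fail:
	/* fc_addr was set up by the aura init above */
	qmem_free(pfvf->dev, pool->fc_addr);
fail:
	otx2_mbox_reset(&pfvf->mbox.mbox, 0);
	mutex_unlock(&pfvf->mbox.lock);
	return err;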

> 
> > +		goto fail;
> > +
> > +	/* Flush accumulated messages */
> > +	err = otx2_sync_mbox_msg(&pfvf->mbox);
> > +	if (err)
> > +		goto pool_fail;
> > +
> > +	/* Allocate pointers and free them to aura/pool */
> > +	pool = &pfvf->qset.pool[pool_id];
> > +	for (ptr = 0; ptr < num_ptrs; ptr++) {
> > +		err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, ptr);
> > +		if (err) {
> > +			err = -ENOMEM;
> > +			goto pool_fail;
> > +		}
> > +		pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr + OTX2_HEAD_ROOM);
> > +	}
> > +
> > +	/* Initialize RQ and map buffers from pool_id */
> > +	err = cn10k_ipsec_ingress_rq_init(pfvf, pfvf->ipsec.inb_ipsec_rq, pool_id);
> > +	if (err)
> > +		goto pool_fail;
> > +
> > +	mutex_unlock(&pfvf->mbox.lock);
> > +	return 0;
> > +
> > +pool_fail:
> > +	mutex_unlock(&pfvf->mbox.lock);
> > +	qmem_free(pfvf->dev, pool->stack);
> > +	qmem_free(pfvf->dev, pool->fc_addr);
> > +	page_pool_destroy(pool->page_pool);
> > +	devm_kfree(pfvf->dev, pool->xdp);
> 
> It is not clear to me why devm_kfree() is being called here.
> I didn't look deeply. But I think it is likely that
> either pool->xdp should be freed when the device is released.
> Or pool->xdp should not be allocated (and freed) using devm functions.
Good catch. We aren't using pool->xdp for inbound IPsec yet, so I'll
drop this.

> 
> > +	pool->xsk_pool = NULL;
> 
> The clean-up of pool->stack, pool->page_pool, pool->xdp, and
> pool->xsk_pool, all seem to unwind initialisation performed by
> otx2_pool_init(). And appear to be duplicated elsewhere.
> I would suggest adding a helper for that.
Okay, I'll look into reusing common code.
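Maybe something along these lines (rough, untested sketch; the helper
name is only a placeholder, and the pool->xdp free is left out since
I'll drop it as mentioned above):

/* Unwind what otx2_pool_init() set up for this pool */
static void otx2_pool_uninit(struct otx2_nic *pfvf, struct otx2_pool *pool)
{
	qmem_free(pfvf->dev, pool->stack);
	pool->stack = NULL;
	page_pool_destroy(pool->page_pool);
	pool->page_pool = NULL;
	pool->xsk_pool = NULL;
}

The pool_fail path here (and the similar unwind paths elsewhere) could
then call it instead of open-coding the frees.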

> 
> > +fail:
> > +	otx2_mbox_reset(&pfvf->mbox.mbox, 0);
> > +	return err;
> > +}
> 
> ...
