Message-ID: <f510aabb-6ca3-28c9-fb47-3db3c712db79@intel.com>
Date: Wed, 16 Aug 2023 14:44:18 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Ratheesh Kannoth <rkannoth@...vell.com>
CC: <netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<sgoutham@...vell.com>, <gakula@...vell.com>, <sbhatta@...vell.com>,
<hkelam@...vell.com>, <davem@...emloft.net>, <edumazet@...gle.com>,
<kuba@...nel.org>, <pabeni@...hat.com>
Subject: Re: [PATCH v2 net] octeontx2-pf: fix page_pool creation fail for
rings > 32k
From: Ratheesh Kannoth <rkannoth@...vell.com>
Date: Wed, 16 Aug 2023 14:37:18 +0530
> The octeontx2 driver calls page_pool_create() during driver probe()
> and fails if the queue size is > 32k. The page pool infra uses these
> buffers as shock absorbers for burst traffic. Due to the recycling
> nature of the page pool, these pages get pinned down over time as the
> working set varies, and since the page pool (currently) doesn't have
> a shrinker mechanism, the pages remain pinned down in the ptr_ring.
> Instead of clamping the page_pool size to at most 32k, limit it even
> further to 2k to avoid wasting memory.
>
> This has been tested on octeontx2 CN10KA hardware.
> TCP and UDP tests using iperf show no performance regressions.
>
> Fixes: b2e3406a38f0 ("octeontx2-pf: Add support for page pool")
> Suggested-by: Alexander Lobakin <aleksander.lobakin@...el.com>
> Reviewed-by: Sunil Goutham <sgoutham@...vell.com>
> Signed-off-by: Ratheesh Kannoth <rkannoth@...vell.com>
> ---
>
> ChangeLogs:
>
> v1->v2: Commit message changes and typo fixes
> v0->v1: Commit message changes.
> ---
> drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 2 +-
> drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h | 2 ++
> 2 files changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> index 77c8f650f7ac..fc8a1220eb39 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> @@ -1432,7 +1432,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
> }
>
> pp_params.flags = PP_FLAG_PAGE_FRAG | PP_FLAG_DMA_MAP;
> - pp_params.pool_size = numptrs;
> + pp_params.pool_size = OTX2_PAGE_POOL_SZ;
You still didn't respond to my previous message, or maybe I missed the
reply somewhere: why not min(numptrs, OTX2_PAGE_POOL_SZ)? Why create a
page_pool with 2k elements for rings with only 128 descriptors?
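Something like this is what I have in mind (just a sketch, not
compile-tested, assuming numptrs is still in scope at this point):

	pp_params.pool_size = min(numptrs, OTX2_PAGE_POOL_SZ);

That way small rings don't allocate a 2k ptr_ring they'll never fill,
while big rings still get clamped to 2k.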
> pp_params.nid = NUMA_NO_NODE;
> pp_params.dev = pfvf->dev;
> pp_params.dma_dir = DMA_FROM_DEVICE;
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> index ba8091131ec0..f6fea43617ff 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> @@ -30,6 +30,8 @@
> #include <rvu_trace.h>
> #include "qos.h"
>
> +#define OTX2_PAGE_POOL_SZ 2048
> +
> /* IPv4 flag more fragment bit */
> #define IPV4_FLAG_MORE 0x20
>
Thanks,
Olek