Message-ID: <110797fb-fea6-cb4d-af3c-4665e8246479@kernel.org>
Date: Tue, 15 Aug 2023 10:41:52 +0200
From: Jesper Dangaard Brouer <hawk@...nel.org>
To: Ratheesh Kannoth <rkannoth@...vell.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Cc: kuba@...nel.org, pabeni@...hat.com, edumazet@...gle.com,
aleksander.lobakin@...el.com, hawk@...nel.org, sgoutham@...vell.com,
gakula@...vell.com, sbhatta@...vell.com, hkelam@...vell.com
Subject: Re: [PATCH v1 net] octeontx2-pf: fix page_pool creation fail for
rings > 32k
On 14/08/2023 15.24, Ratheesh Kannoth wrote:
> octeontx2 driver calls page_pool_create() during driver probe()
> and fails if queue size > 32k. Page pool infra uses these buffers
> as shock absorbers for burst traffic. These pages are pinned
> down as soon as page pool is created.
It isn't true that "pages are pinned down as soon as page pool is created".
We need to improve this commit text.
My suggestion:
These pages get pinned down over time as the working set varies, due
to the recycling nature of page pool. Given that page pool (currently)
doesn't have a shrinker mechanism, the pages remain pinned down in the
ptr_ring.
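For reference, the probe() failure comes from the hard cap in
page_pool_init(). A rough sketch of the relevant check (paraphrased
from net/core/page_pool.c, not verbatim):

  static int page_pool_init(struct page_pool *pool,
                            const struct page_pool_params *params)
  {
          unsigned int ring_qsize = 1024; /* Default */

          if (pool->p.pool_size)
                  ring_qsize = pool->p.pool_size;

          /* Sanity limit mem that can be pinned down */
          if (ring_qsize > 32768)
                  return -E2BIG;
          ...
  }

The pool_size directly sizes the ptr_ring, which is why it bounds how
much memory recycling can pin down per pool.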
> As page pool does direct
> recycling way more aggressivelyi, often times ptr_ring is left
                                 ^
Typo
(my suggestion already covers recycling)
> unused at all. Instead of clamping page_pool size to 32k at
> most, limit it even more to 2k to avoid wasting memory on much
> less used ptr_ring.
I would adjust the wording and delete "much less used".
I assume you have the octeontx2 hardware available (which I don't).
Can you test that this adjustment to 2k doesn't cause a performance
regression on your hardware?
And then produce a statement in the commit desc like:
This has been tested on octeontx2 hardware (devel board xyz).
TCP and UDP tests using netperf show no performance regressions.
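E.g. something like this (the peer address is just a placeholder),
comparing before/after numbers on the same board:

  # TCP throughput through the octeontx2 interface
  netperf -H 198.51.100.2 -t TCP_STREAM -l 60
  # UDP throughput, same peer
  netperf -H 198.51.100.2 -t UDP_STREAM -l 60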
2K entries with page_size 4KiB come to around 8MiB if the PP gets full.
It would be convincing if the commit message said e.g. that a PP
pool_size of 2k can pin down at most 8MiB per RX-queue (assuming page
size 4K), and that this is okay as systems using octeontx2 hardware
often have many GB of memory.
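That is:

  2048 ptrs * 4 KiB/page = 8 MiB max pinned per RX-queue

so even with e.g. 32 RX-queues the worst case is only 256 MiB.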
>
> Fixes: b2e3406a38f0 ("octeontx2-pf: Add support for page pool")
> Suggested-by: Alexander Lobakin <aleksander.lobakin@...el.com>
> Signed-off-by: Ratheesh Kannoth <rkannoth@...vell.com>
>
> ---
>
> ChangeLogs:
>
> v0->v1: Commit message changes.
> ---
> drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 2 +-
> drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h | 2 ++
> 2 files changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> index 77c8f650f7ac..fc8a1220eb39 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> @@ -1432,7 +1432,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
> }
>
> pp_params.flags = PP_FLAG_PAGE_FRAG | PP_FLAG_DMA_MAP;
> - pp_params.pool_size = numptrs;
> + pp_params.pool_size = OTX2_PAGE_POOL_SZ;
> pp_params.nid = NUMA_NO_NODE;
> pp_params.dev = pfvf->dev;
> pp_params.dma_dir = DMA_FROM_DEVICE;
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> index ba8091131ec0..f6fea43617ff 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> @@ -30,6 +30,8 @@
> #include <rvu_trace.h>
> #include "qos.h"
>
> +#define OTX2_PAGE_POOL_SZ 2048
> +
> /* IPv4 flag more fragment bit */
> #define IPV4_FLAG_MORE 0x20
>