Message-ID: <20250626154029.22cd5d2d@kernel.org>
Date: Thu, 26 Jun 2025 15:40:29 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Michael Chan <michael.chan@...adcom.com>
Cc: davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com,
pabeni@...hat.com, andrew+netdev@...n.ch, horms@...nel.org,
pavan.chebbi@...adcom.com
Subject: Re: [PATCH net-next] eth: bnxt: take page size into account for
page pool recycling rings
On Thu, 26 Jun 2025 14:52:17 -0700 Michael Chan wrote:
> > {
> > + const unsigned int agg_size_fac = PAGE_SIZE / BNXT_RX_PAGE_SIZE;
> > + const unsigned int rx_size_fac = PAGE_SIZE / SZ_4K;
> > struct page_pool_params pp = { 0 };
> > struct page_pool *pool;
> >
> > - pp.pool_size = bp->rx_agg_ring_size;
> > + pp.pool_size = bp->rx_agg_ring_size / agg_size_fac;
>
> The bp->rx_agg_ring_size has already taken the system PAGE_SIZE into
> consideration to some extent in bnxt_set_ring_params(). The
> jumbo_factor and agg_factor will be smaller when PAGE_SIZE is larger.
> Will this overcompensate?
My understanding is basically that bnxt_set_ring_params() operates
on BNXT_RX_PAGE_SIZE, so it takes care of the 4k .. 32k range pretty
well. But for 64k pages we will use 32k buffers, i.e. 2 agg ring
entries per system page. If our heuristic is that we want the same
number of pages on the device ring as in the pp cache, we should
divide the cache size by two. Hope that makes sense.
My initial temptation was to say that the agg ring size is always
shown to the user in 4kB units, regardless of the system page size.
The driver would divide and multiply the parameter in the ethtool
callbacks. Otherwise, even with this patch, existing configs for bnxt
have to be adjusted based on system page size :( But I suspect you
may have existing users on systems with 64kB pages, so this would be
too risky? WDYT?