Message-ID: <20250211110635.16a43562@kernel.org>
Date: Tue, 11 Feb 2025 11:06:35 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Tariq Toukan <ttoukan.linux@...il.com>
Cc: davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com,
pabeni@...hat.com, andrew+netdev@...n.ch, horms@...nel.org,
tariqt@...dia.com, hawk@...nel.org
Subject: Re: [PATCH net-next 1/4] eth: mlx4: create a page pool for Rx

On Tue, 11 Feb 2025 20:01:08 +0200 Tariq Toukan wrote:
> > The pool_size is just the size of the cache, how many unallocated
> > DMA mapped pages we can keep around before freeing them to system
> > memory. It has no implications for correctness.
>
> Right, it doesn't hurt correctness.
> But, we better have the cache size derived from the overall ring buffer
> size, so that the memory consumption/footprint reflects the user
> configuration.
>
> Something like:
>
> ring->size * sum(priv->frag_info[i].frag_stride for i < num_frags).
>
> or roughly ring->size * MLX4_EN_EFF_MTU(dev->mtu).

These calculations appear to produce a byte count?
The ring size is in *pages*. The frag size is also somewhat irrelevant, given
that we're talking about full pages here, not 2k frags. So I think
I'll go with:
        pp.pool_size =
                size * DIV_ROUND_UP(MLX4_EN_EFF_MTU(dev->mtu), PAGE_SIZE);
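
For reference, a minimal sketch of how that pool_size could slot into the
page pool setup at ring-creation time. Only the pool_size line is taken from
the discussion above; the other fields follow the generic
struct page_pool_params API, and the specific choices (priv->ddev as the DMA
device, the size/node parameters, DMA_FROM_DEVICE) are illustrative
assumptions, not quotes from the actual patch:

        struct page_pool_params pp = {};

        /* pool_size only bounds how many unallocated DMA-mapped pages the
         * pool caches before releasing them back to the system; undersizing
         * it costs extra page allocations, not correctness.
         */
        pp.pool_size =
                size * DIV_ROUND_UP(MLX4_EN_EFF_MTU(dev->mtu), PAGE_SIZE);
        pp.flags = PP_FLAG_DMA_MAP;     /* pool owns the DMA mapping */
        pp.nid = node;                  /* NUMA node the ring was requested on */
        pp.dev = priv->ddev;            /* assumed DMA device for the mappings */
        pp.dma_dir = DMA_FROM_DEVICE;   /* assume Rx-only use of these pages */

        ring->pp = page_pool_create(&pp);
        if (IS_ERR(ring->pp))
                return PTR_ERR(ring->pp);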