Message-ID: <587688ee-2e81-49f5-a1a2-4198c14ac184@gmail.com>
Date: Tue, 11 Feb 2025 21:21:13 +0200
From: Tariq Toukan <ttoukan.linux@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com,
pabeni@...hat.com, andrew+netdev@...n.ch, horms@...nel.org,
tariqt@...dia.com, hawk@...nel.org
Subject: Re: [PATCH net-next 1/4] eth: mlx4: create a page pool for Rx
On 11/02/2025 21:11, Tariq Toukan wrote:
>
>
> On 11/02/2025 21:06, Jakub Kicinski wrote:
>> On Tue, 11 Feb 2025 20:01:08 +0200 Tariq Toukan wrote:
>>>> The pool_size is just the size of the cache, how many unallocated
>>>> DMA mapped pages we can keep around before freeing them to system
>>>> memory. It has no implications for correctness.
>>>
>>> Right, it doesn't hurt correctness.
>>> But, we better have the cache size derived from the overall ring buffer
>>> size, so that the memory consumption/footprint reflects the user
>>> configuration.
>>>
>>> Something like:
>>>
>>> ring->size * sum(priv->frag_info[i].frag_stride for i < num_frags).
>>>
>>> or roughly ring->size * MLX4_EN_EFF_MTU(dev->mtu).
>>
>> These calculations appear to produce a byte count?
>
> Yes.
> Of course, it needs to be aligned and translated to a page count.
>
>> The ring size is in *pages*. Frag is also somewhat irrelevant, given
>> that we're talking about full pages here, not 2k frags. So I think
>> I'll go with:
>>
>> pp.pool_size =
>> size * DIV_ROUND_UP(MLX4_EN_EFF_MTU(dev->mtu), PAGE_SIZE);
>
Can use priv->rx_skb_size as well; it holds the eff-MTU value.
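
I.e., in mlx4_en_create_rx_ring() it could look roughly like the below
(a sketch only, untested, not the actual patch; the pci-dev chasing and
priv->dma_dir are from memory, so double-check; needs
<net/page_pool/types.h>):

	struct page_pool_params pp = {};

	pp.flags = PP_FLAG_DMA_MAP;
	/* cache up to one page per worst-case packet for the whole ring */
	pp.pool_size = size * DIV_ROUND_UP(priv->rx_skb_size, PAGE_SIZE);
	pp.nid = node;
	pp.dev = &priv->mdev->dev->persist->pdev->dev;
	pp.dma_dir = priv->dma_dir;

	ring->pp = page_pool_create(&pp);
	if (IS_ERR(ring->pp))
		return PTR_ERR(ring->pp);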
>
> LGTM.
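
FWIW, a quick sanity check on the numbers (assuming the 1024-entry
default ring, and if I read MLX4_EN_EFF_MTU right as
mtu + ETH_HLEN + 2 * VLAN_HLEN): at 1500 MTU the per-packet term rounds
up to a single 4K page, so pool_size == 1024; at 9000 MTU it's 3 pages
per entry, i.e. a 3072-page cache cap. So the cache scales with the
user-configured ring size, which was the point.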