Message-ID: <18dc77ac-5671-43ed-ac88-1c145bc37a00@gmail.com>
Date: Tue, 11 Feb 2025 20:01:08 +0200
From: Tariq Toukan <ttoukan.linux@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com,
pabeni@...hat.com, andrew+netdev@...n.ch, horms@...nel.org,
tariqt@...dia.com, hawk@...nel.org
Subject: Re: [PATCH net-next 1/4] eth: mlx4: create a page pool for Rx
On 07/02/2025 1:04, Jakub Kicinski wrote:
> On Thu, 6 Feb 2025 21:44:38 +0200 Tariq Toukan wrote:
>>> - if (xdp_rxq_info_reg(&ring->xdp_rxq, priv->dev, queue_index, 0) < 0)
>>> + pp.flags = PP_FLAG_DMA_MAP;
>>> + pp.pool_size = MLX4_EN_MAX_RX_SIZE;
>>
>> The pool size is not accurate.
>> On one hand, MLX4_EN_MAX_RX_SIZE might be too big compared to the
>> actual ring size.
>>
>> However, more importantly, it can be too small when working with a
>> large MTU. (Large MTU is mutually exclusive with XDP in mlx4.)
>>
>> Rx ring entries consist of 'frags'; each entry needs between 1 and 4
>> (MLX4_EN_MAX_RX_FRAGS) frags. With the default MTU, each page is
>> shared between two entries.
>
> The pool_size is just the size of the cache, how many unallocated
> DMA mapped pages we can keep around before freeing them to system
> memory. It has no implications for correctness.
Right, it doesn't hurt correctness.
But it would be better to derive the cache size from the overall ring
buffer size, so that the memory consumption/footprint reflects the user
configuration.
Something like:
ring->size * sum(priv->frag_info[i].frag_stride for i < num_frags),
or roughly ring->size * MLX4_EN_EFF_MTU(dev->mtu).
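
For illustration only, roughly what I mean (a sketch, assuming
priv->num_frags and priv->frag_info[] are already populated for the
current MTU at the point where the pool is created; whether that
ordering holds in mlx4_en_create_rx_ring() would need checking):

	/* Sketch: size the page_pool cache from the actual ring
	 * configuration rather than MLX4_EN_MAX_RX_SIZE. Assumes
	 * priv->num_frags / priv->frag_info[] are valid for the
	 * current MTU at this point.
	 */
	u32 bytes_per_entry = 0;
	int i;

	for (i = 0; i < priv->num_frags; i++)
		bytes_per_entry += priv->frag_info[i].frag_stride;

	/* pool_size is a page count, so convert total bytes to pages */
	pp.pool_size = DIV_ROUND_UP(ring->size * bytes_per_entry, PAGE_SIZE);

At the default MTU that comes out to roughly ring->size / 2 pages (one
page shared between two entries), and it scales up for larger MTUs.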