Message-ID: <524cf976-a734-4d30-915b-2480a6139e27@nvidia.com>
Date: Sun, 15 Jun 2025 08:55:20 +0300
From: Moshe Shemesh <moshe@...dia.com>
To: Zhu Yanjun <yanjun.zhu@...ux.dev>, Mark Bloch <mbloch@...dia.com>, "David
S. Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>, "Paolo
Abeni" <pabeni@...hat.com>, Eric Dumazet <edumazet@...gle.com>, Andrew Lunn
<andrew+netdev@...n.ch>, Simon Horman <horms@...nel.org>
CC: <saeedm@...dia.com>, <gal@...dia.com>, <leonro@...dia.com>,
<tariqt@...dia.com>, Leon Romanovsky <leon@...nel.org>,
<netdev@...r.kernel.org>, <linux-rdma@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net 1/9] net/mlx5: Ensure fw pages are always allocated on
same NUMA
On 6/13/2025 7:22 PM, Zhu Yanjun wrote:
> On 2025/6/10 8:15, Mark Bloch wrote:
>> From: Moshe Shemesh <moshe@...dia.com>
>>
>> When firmware asks the driver to allocate more pages, using the
>> give_pages event, the driver should always allocate them from the same
>> NUMA node, the original device NUMA node. The current code uses
>> dev_to_node(), which can return a different NUMA node because it is
>> changed by other driver flows, such as mlx5_dma_zalloc_coherent_node().
>> Instead, use the saved NUMA node for allocating firmware pages.
>
> I'm not sure whether NUMA balancing is currently being considered or not.
>
> If I understand correctly, after this commit is applied, all pages will
> be allocated from the same NUMA node — specifically, the original
> device's NUMA node. This seems like it could lead to NUMA imbalance.
The change applies only to pages allocated for FW use. Pages allocated
for driver use, such as SQ/RQ/CQ/EQ, are not affected by this change.
As for FW pages (allocated for FW use), we do intend to use only the
NUMA node close to the device; we are not looking for balance here.
Even before this change, FW pages were in most cases allocated from the
NUMA node close to the device; the fix only ensures it.
>
> By using dev_to_node, it appears that pages could be allocated from
> other NUMA nodes, which might help maintain better NUMA balance.
>
> In the past, I encountered a NUMA balancing issue caused by the mlx5
> NIC, so using dev_to_node might be beneficial in addressing similar
> problems.
>
> Thanks,
> Zhu Yanjun
>
>>
>> Fixes: 311c7c71c9bb ("net/mlx5e: Allocate DMA coherent memory on reader NUMA node")
>> Signed-off-by: Moshe Shemesh <moshe@...dia.com>
>> Reviewed-by: Tariq Toukan <tariqt@...dia.com>
>> Signed-off-by: Mark Bloch <mbloch@...dia.com>
>> ---
>> drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
>> index 972e8e9df585..9bc9bd83c232 100644
>> --- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
>> @@ -291,7 +291,7 @@ static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 function)
>> static int alloc_system_page(struct mlx5_core_dev *dev, u32 function)
>> {
>> struct device *device = mlx5_core_dma_dev(dev);
>> - int nid = dev_to_node(device);
>> + int nid = dev->priv.numa_node;
>> struct page *page;
>> u64 zero_addr = 1;
>> u64 addr;
>
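
For context, a sketch of how the saved node is meant to work (the
struct layout and the record_home_node() helper below are illustrative,
not quoted from the tree): the node is read once when the device is set
up, so later temporary set_dev_node() calls around DMA allocations do
not affect where FW pages land:

#include <linux/device.h>

struct drv_priv {
        int numa_node;      /* mirrors dev->priv.numa_node in the patch */
};

/* Sketch only: capture the device's home NUMA node once at init time. */
static void record_home_node(struct drv_priv *priv, struct device *device)
{
        priv->numa_node = dev_to_node(device);
}

After that point, allocation paths that must stay device-local read
priv->numa_node rather than calling dev_to_node() again.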