Message-ID: <20220728191203.4055-1-tariqt@nvidia.com>
Date: Thu, 28 Jul 2022 22:12:00 +0300
From: Tariq Toukan <tariqt@...dia.com>
To: "David S. Miller" <davem@...emloft.net>,
Saeed Mahameed <saeedm@...dia.com>,
Jakub Kicinski <kuba@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>
CC: Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>, <netdev@...r.kernel.org>,
Gal Pressman <gal@...dia.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
<linux-kernel@...r.kernel.org>, Tariq Toukan <tariqt@...dia.com>
Subject: [PATCH net-next V4 0/3] Introduce and use NUMA distance metrics

Hi,

Implement and expose a CPU spread API based on the scheduler's
sched_numa_find_closest(), and use it in the mlx5 and enic device
drivers. This replaces the binary NUMA preference (local / remote)
with one that takes the actual inter-node distances into account, so
that remote NUMA nodes at a short distance are preferred over farther
ones.

This has significant performance implications when NUMA-aware memory
allocations are used, improving both throughput and CPU utilization.
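
Below is a minimal, illustrative sketch of the idea, for reviewers'
convenience. The helper name, its signature, and the u16 output array
are assumptions made for this cover letter only, not necessarily the
exact API added by patch 1; it simply spreads CPUs by increasing NUMA
distance from a reference node using the existing
sched_numa_find_closest():

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/sched/topology.h>
#include <linux/topology.h>
#include <linux/types.h>

/*
 * Illustrative only: fill @cpus[] with up to @ncpus online CPUs,
 * ordered so that CPUs on NUMA nodes closer to @node come first.
 * Assumes @node is a valid NUMA node. Returns the number of CPUs
 * written, or -ENOMEM on allocation failure.
 */
static int example_numa_spread_cpus(u16 *cpus, int ncpus, int node)
{
	cpumask_var_t mask;
	int ref_cpu, cpu, i;

	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	cpumask_copy(mask, cpu_online_mask);

	/* Reference CPU on the device's node, if it has any online CPU */
	ref_cpu = cpumask_first(cpumask_of_node(node));
	if (ref_cpu >= nr_cpu_ids)
		ref_cpu = cpumask_first(cpu_online_mask);

	for (i = 0; i < ncpus; i++) {
		/* Closest remaining CPU to @ref_cpu, by NUMA distance */
		cpu = sched_numa_find_closest(mask, ref_cpu);
		if (cpu >= nr_cpu_ids)
			break;
		cpus[i] = cpu;
		/* Local mask, so the non-atomic clear is sufficient */
		__cpumask_clear_cpu(cpu, mask);
	}

	free_cpumask_var(mask);
	return i;
}

Patches 2 and 3 apply such a distance-aware spread when the mlx5 and
enic drivers set their IRQ affinity hints; the exact helpers and call
sites are in the individual patches.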

Regards,
Tariq

v4:
- memset the cpus array to zero when !CONFIG_SMP.

v3:
- Introduce the logic as a common API instead of being mlx5 specific.
- Add implementation to the enic device driver.
- Use the non-atomic version, __cpumask_clear_cpu.

v2:
- Replace EXPORT_SYMBOL with EXPORT_SYMBOL_GPL, per Peter's comment.
- Separate the set_cpu operation into two functions, per Saeed's suggestion.
- Add Saeed's Acked-by signature.

Tariq Toukan (3):
  sched/topology: Add NUMA-based CPUs spread API
  net/mlx5e: Improve remote NUMA preferences used for the IRQ affinity
    hints
  enic: Use NUMA distances logic when setting affinity hints

 drivers/net/ethernet/cisco/enic/enic_main.c  | 10 +++-
 drivers/net/ethernet/mellanox/mlx5/core/eq.c |  5 +-
 include/linux/sched/topology.h               |  5 ++
 kernel/sched/topology.c                      | 49 ++++++++++++++++++++
 4 files changed, 65 insertions(+), 4 deletions(-)
--
2.21.0