Message-ID: <YtUzu4d9F+V621tw@worktop.programming.kicks-ass.net>
Date: Mon, 18 Jul 2022 12:19:39 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Tariq Toukan <tariqt@...dia.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Saeed Mahameed <saeedm@...dia.com>,
Jakub Kicinski <kuba@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org,
Gal Pressman <gal@...dia.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next 1/2] sched/topology: Expose sched_numa_find_closest

On Sun, Jul 17, 2022 at 08:23:00AM +0300, Tariq Toukan wrote:
> This logic can help device drivers prefer some remote cpus
> over others, according to the NUMA distance metrics.
>
> Reviewed-by: Gal Pressman <gal@...dia.com>
> Signed-off-by: Tariq Toukan <tariqt@...dia.com>
> ---
> include/linux/sched/topology.h | 2 ++
> kernel/sched/topology.c | 1 +
> 2 files changed, 3 insertions(+)
>
> diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
> index 56cffe42abbc..d467c30bdbb9 100644
> --- a/include/linux/sched/topology.h
> +++ b/include/linux/sched/topology.h
> @@ -61,6 +61,8 @@ static inline int cpu_numa_flags(void)
> {
> return SD_NUMA;
> }
> +
> +int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
> #endif
>
> extern int arch_asym_cpu_priority(int cpu);
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 05b6c2ad90b9..688334ac4980 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -2066,6 +2066,7 @@ int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
>
> return found;
> }
> +EXPORT_SYMBOL(sched_numa_find_closest);

EXPORT_SYMBOL_GPL() if anything.
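
Roughly, against the hunk above (sketch only; the function body is
unchanged and elided here):

	int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
	{
		...
		return found;
	}
	/* GPL-only export of scheduler-internal topology knowledge */
	EXPORT_SYMBOL_GPL(sched_numa_find_closest);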

Also, this thing is subject to sched_domains, which means that if
someone uses cpusets or other means to partition the machine, that
affects the result.

Is that what you want?
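
For illustration only, a hypothetical caller (made-up name, not from
this series) of the kind a driver would write to spread IRQs over the
CPUs closest to its local CPU; whatever partitioning the scheduler
sees is what a caller like this ends up relying on:

	#include <linux/cpumask.h>
	#include <linux/gfp.h>
	#include <linux/sched/topology.h>

	/*
	 * Hypothetical: assign @n IRQs to CPUs, each time taking the
	 * remaining CPU with the smallest NUMA distance to @local_cpu.
	 */
	static void spread_irqs(int *irq_cpu, int n, int local_cpu)
	{
		cpumask_var_t avail;
		int i, cpu;

		if (!zalloc_cpumask_var(&avail, GFP_KERNEL))
			return;

		cpumask_copy(avail, cpu_online_mask);
		for (i = 0; i < n; i++) {
			cpu = sched_numa_find_closest(avail, local_cpu);
			if (cpu >= nr_cpu_ids) {
				/* mask exhausted; start over from all online CPUs */
				cpumask_copy(avail, cpu_online_mask);
				cpu = sched_numa_find_closest(avail, local_cpu);
				if (cpu >= nr_cpu_ids)
					break;
			}
			irq_cpu[i] = cpu;
			cpumask_clear_cpu(cpu, avail);
		}

		free_cpumask_var(avail);
	}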