Message-ID: <03aaf512-3ac5-fdfe-da2d-3fecd24591e2@gmail.com>
Date: Wed, 10 Aug 2022 15:57:54 +0300
From: Tariq Toukan <ttoukan.linux@...il.com>
To: Valentin Schneider <vschneid@...hat.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Cc: Tariq Toukan <tariqt@...dia.com>,
"David S. Miller" <davem@...emloft.net>,
Saeed Mahameed <saeedm@...dia.com>,
Jakub Kicinski <kuba@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>, Gal Pressman <gal@...dia.com>,
Vincent Guittot <vincent.guittot@...aro.org>
Subject: Re: [PATCH 1/2] sched/topology: Introduce sched_numa_hop_mask()
On 8/10/2022 3:42 PM, Tariq Toukan wrote:
>
>
> On 8/10/2022 1:51 PM, Valentin Schneider wrote:
>> Tariq has pointed out that drivers allocating IRQ vectors would benefit
>> from having smarter NUMA-awareness - cpumask_local_spread() only knows
>> about the local node and everything outside is in the same bucket.
>>
>> sched_domains_numa_masks is pretty much what we want to hand out (a cpumask
>> of CPUs reachable within a given distance budget), introduce
>> sched_numa_hop_mask() to export those cpumasks. Add in an iteration helper
>> to iterate over CPUs at an incremental distance from a given node.
>>
>> Link: http://lore.kernel.org/r/20220728191203.4055-1-tariqt@nvidia.com
>> Signed-off-by: Valentin Schneider <vschneid@...hat.com>
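
FWIW, to make sure we're aligned on the intended use: here is roughly how I'd
expect a driver allocating IRQ vectors to consume the iterator (an untested
sketch just for illustration, not part of the patch; 'node' is the device's
NUMA node and the pr_debug() stands in for the actual affinity bookkeeping):

	const struct cpumask *mask;
	int hops = 0, cpu;

	rcu_read_lock();
	for_each_numa_hop_mask(node, hops, mask) {
		/*
		 * IIUC each successive mask is a superset of the previous
		 * one, so a real user would skip CPUs already handled at
		 * closer hops.
		 */
		for_each_cpu(cpu, mask)
			pr_debug("hops=%d cpu=%d\n", hops, cpu);
	}
	rcu_read_unlock();
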
>> ---
>> include/linux/topology.h | 12 ++++++++++++
>> kernel/sched/topology.c | 28 ++++++++++++++++++++++++++++
>> 2 files changed, 40 insertions(+)
>>
>> diff --git a/include/linux/topology.h b/include/linux/topology.h
>> index 4564faafd0e1..d66e3cf40823 100644
>> --- a/include/linux/topology.h
>> +++ b/include/linux/topology.h
>> @@ -245,5 +245,17 @@ static inline const struct cpumask *cpu_cpu_mask(int cpu)
>> return cpumask_of_node(cpu_to_node(cpu));
>> }
>> +#ifdef CONFIG_NUMA
>> +extern const struct cpumask *sched_numa_hop_mask(int node, int hops);
>> +#else
>> +static inline const struct cpumask *sched_numa_hop_mask(int node, int hops)
>> +{
>> + return -ENOTSUPP;
>
> missing ERR_PTR()
>
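To spell it out, IIUC the !CONFIG_NUMA stub wants to be along the lines of:

	return ERR_PTR(-ENOTSUPP);

so that the return matches the pointer type and IS_ERR_OR_NULL() in the
iterator below keeps working.
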
>> +}
>> +#endif /* CONFIG_NUMA */
>> +
>> +#define for_each_numa_hop_mask(node, hops, mask) \
>> + for (mask = sched_numa_hop_mask(node, hops); !IS_ERR_OR_NULL(mask); \
>> + mask = sched_numa_hop_mask(node, ++hops))
>> #endif /* _LINUX_TOPOLOGY_H */
>> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
>> index 8739c2a5a54e..f0236a0ae65c 100644
>> --- a/kernel/sched/topology.c
>> +++ b/kernel/sched/topology.c
>> @@ -2067,6 +2067,34 @@ int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
>> return found;
>> }
>> +/**
>> + * sched_numa_hop_mask() - Get the cpumask of CPUs at most @hops hops away.
>> + * @node: The node to count hops from.
>> + * @hops: Include CPUs up to that many hops away. 0 means local node.
AFAIU, here you work with a specific level/num of hops, so the description is
not accurate.
>> + *
>> + * Requires rcu_lock to be held. Returned cpumask is only valid within that
>> + * read-side section, copy it if required beyond that.
>> + *
>> + * Note that not all hops are equal in size; see sched_init_numa() for how
>> + * distances and masks are handled.
>> + *
>> + * Also note that this is a reflection of sched_domains_numa_masks, which
>> + * may change during the lifetime of the system (offline nodes are taken
>> + * out of the masks).
>> + */
>> +const struct cpumask *sched_numa_hop_mask(int node, int hops)
>> +{
>> + struct cpumask ***masks = rcu_dereference(sched_domains_numa_masks);
>> +
>> + if (node >= nr_node_ids || hops >= sched_domains_numa_levels)
>> + return ERR_PTR(-EINVAL);
>> +
>> + if (!masks)
>> + return NULL;
>> +
>> + return masks[hops][node];
>> +}
>> +EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
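
For my understanding, a call site that needs the mask beyond the read-side
section would then look something like this (rough sketch; 'dst' is a
caller-owned cpumask and 'node' the node of interest, both made up for the
example):

	const struct cpumask *mask;

	rcu_read_lock();
	mask = sched_numa_hop_mask(node, 0);	/* CPUs of the local node */
	if (!IS_ERR_OR_NULL(mask))
		cpumask_copy(dst, mask);	/* copy out before unlocking */
	rcu_read_unlock();
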
>> +
>> #endif /* CONFIG_NUMA */
>> static int __sdt_alloc(const struct cpumask *cpu_map)