Message-Id: <20220810105119.2684079-1-vschneid@redhat.com>
Date: Wed, 10 Aug 2022 11:51:18 +0100
From: Valentin Schneider <vschneid@...hat.com>
To: netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Cc: Tariq Toukan <ttoukan.linux@...il.com>,
Tariq Toukan <tariqt@...dia.com>,
"David S. Miller" <davem@...emloft.net>,
Saeed Mahameed <saeedm@...dia.com>,
Jakub Kicinski <kuba@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>, Gal Pressman <gal@...dia.com>,
Vincent Guittot <vincent.guittot@...aro.org>
Subject: [PATCH 1/2] sched/topology: Introduce sched_numa_hop_mask()

Tariq has pointed out that drivers allocating IRQ vectors would benefit
from having smarter NUMA-awareness: cpumask_local_spread() only knows
about the local node, and everything outside of it ends up in the same
bucket.

sched_domains_numa_masks is pretty much what we want to hand out (a
cpumask of CPUs reachable within a given distance budget), so introduce
sched_numa_hop_mask() to export those cpumasks. Add an iteration
helper, for_each_numa_hop_mask(), to walk those masks in order of
increasing distance from a given node.
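Not part of this patch, but to illustrate the intended usage: a driver
could spread IRQ vectors over CPUs in order of increasing NUMA distance
from a given node along the lines of the sketch below, where
example_assign_vector() is a made-up placeholder for whatever the
driver does with each CPU:

  static void example_spread(int node, int nvec)
  {
  	const struct cpumask *mask;
  	cpumask_var_t covered;
  	int cpu, hops = 0, i = 0;

  	if (!zalloc_cpumask_var(&covered, GFP_KERNEL))
  		return;

  	/* The hop masks are RCU-protected */
  	rcu_read_lock();
  	for_each_numa_hop_mask(node, hops, mask) {
  		/* Each mask is a superset of the previous one */
  		for_each_cpu(cpu, mask) {
  			if (cpumask_test_cpu(cpu, covered))
  				continue;
  			cpumask_set_cpu(cpu, covered);

  			example_assign_vector(i++, cpu);
  			if (i >= nvec)
  				goto done;
  		}
  	}
  done:
  	rcu_read_unlock();

  	free_cpumask_var(covered);
  }

Note that the iterator itself advances @hops, and that each successive
mask includes all CPUs of the previous one, hence the "covered"
tracking above.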
Link: http://lore.kernel.org/r/20220728191203.4055-1-tariqt@nvidia.com
Signed-off-by: Valentin Schneider <vschneid@...hat.com>
---
 include/linux/topology.h | 12 ++++++++++++
 kernel/sched/topology.c  | 28 ++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/include/linux/topology.h b/include/linux/topology.h
index 4564faafd0e1..d66e3cf40823 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -245,5 +245,17 @@ static inline const struct cpumask *cpu_cpu_mask(int cpu)
 	return cpumask_of_node(cpu_to_node(cpu));
 }
 
+#ifdef CONFIG_NUMA
+extern const struct cpumask *sched_numa_hop_mask(int node, int hops);
+#else
+static inline const struct cpumask *sched_numa_hop_mask(int node, int hops)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+#endif /* CONFIG_NUMA */
+
+#define for_each_numa_hop_mask(node, hops, mask)			\
+	for (mask = sched_numa_hop_mask(node, hops); !IS_ERR_OR_NULL(mask); \
+	     mask = sched_numa_hop_mask(node, ++hops))
 
 #endif /* _LINUX_TOPOLOGY_H */
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 8739c2a5a54e..f0236a0ae65c 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2067,6 +2067,34 @@ int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
 	return found;
 }
 
+/**
+ * sched_numa_hop_mask() - Get the cpumask of CPUs at most @hops hops away.
+ * @node: The node to count hops from.
+ * @hops: Include CPUs up to that many hops away. 0 means local node.
+ *
+ * Requires rcu_read_lock() to be held. The returned cpumask is only valid
+ * within that read-side section; copy it if required beyond that.
+ *
+ * Note that not all hops are equal in size; see sched_init_numa() for how
+ * distances and masks are handled.
+ *
+ * Also note that this is a reflection of sched_domains_numa_masks, which
+ * may change during the lifetime of the system (offline nodes are taken
+ * out of the masks).
+ */
+const struct cpumask *sched_numa_hop_mask(int node, int hops)
+{
+	struct cpumask ***masks = rcu_dereference(sched_domains_numa_masks);
+
+	if (node >= nr_node_ids || hops >= sched_domains_numa_levels)
+		return ERR_PTR(-EINVAL);
+
+	if (!masks)
+		return NULL;
+
+	return masks[hops][node];
+}
+EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
+
 #endif /* CONFIG_NUMA */
 
 static int __sdt_alloc(const struct cpumask *cpu_map)
--
2.31.1