Message-ID: <e9f8466e-d138-3cc6-11c6-2d62f4c9dc4a@intel.com>
Date: Mon, 25 Sep 2023 15:46:25 -0700
From: Jacob Keller <jacob.e.keller@...el.com>
To: Yury Norov <yury.norov@...il.com>, <linux-kernel@...r.kernel.org>,
<netdev@...r.kernel.org>, <linux-rdma@...r.kernel.org>
CC: Tariq Toukan <ttoukan.linux@...il.com>,
Valentin Schneider <vschneid@...hat.com>,
Maher Sanalla <msanalla@...dia.com>,
Ingo Molnar <mingo@...nel.org>, Mel Gorman <mgorman@...e.de>,
Saeed Mahameed <saeedm@...dia.com>,
Leon Romanovsky <leon@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Pawel Chmielewski <pawel.chmielewski@...el.com>,
Yury Norov <ynorov@...dia.com>
Subject: Re: [PATCH 3/4] Revert "sched/topology: Introduce
sched_numa_hop_mask()"
On 9/24/2023 7:05 PM, Yury Norov wrote:
> This reverts commit 9feae65845f7b16376716fe70b7d4b9bf8721848.
>
> Now that for_each_numa_hop_mask() is reverted, revert the underlying
> machinery.
>
> Signed-off-by: Yury Norov <yury.norov@...il.com>
> Signed-off-by: Yury Norov <ynorov@...dia.com>
> ---
> include/linux/topology.h | 7 -------
> kernel/sched/topology.c | 33 ---------------------------------
> 2 files changed, 40 deletions(-)
>
Reviewed-by: Jacob Keller <jacob.e.keller@...el.com>
> diff --git a/include/linux/topology.h b/include/linux/topology.h
> index 344c2362755a..72f264575698 100644
> --- a/include/linux/topology.h
> +++ b/include/linux/topology.h
> @@ -247,18 +247,11 @@ static inline const struct cpumask *cpu_cpu_mask(int cpu)
>
> #ifdef CONFIG_NUMA
> int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node);
> -extern const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops);
> #else
> static __always_inline int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
> {
> return cpumask_nth(cpu, cpus);
> }
> -
> -static inline const struct cpumask *
> -sched_numa_hop_mask(unsigned int node, unsigned int hops)
> -{
> - return ERR_PTR(-EOPNOTSUPP);
> -}
> #endif /* CONFIG_NUMA */
>
> #endif /* _LINUX_TOPOLOGY_H */
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 05a5bc678c08..3f1c09a9ef6d 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -2143,39 +2143,6 @@ int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
> return ret;
> }
> EXPORT_SYMBOL_GPL(sched_numa_find_nth_cpu);
> -
> -/**
> - * sched_numa_hop_mask() - Get the cpumask of CPUs at most @hops hops away from
> - * @node
> - * @node: The node to count hops from.
> - * @hops: Include CPUs up to that many hops away. 0 means local node.
> - *
> - * Return: On success, a pointer to a cpumask of CPUs at most @hops away from
> - * @node, an error value otherwise.
> - *
> - * Requires rcu_lock to be held. Returned cpumask is only valid within that
> - * read-side section, copy it if required beyond that.
> - *
> - * Note that not all hops are equal in distance; see sched_init_numa() for how
> - * distances and masks are handled.
> - * Also note that this is a reflection of sched_domains_numa_masks, which may change
> - * during the lifetime of the system (offline nodes are taken out of the masks).
> - */
> -const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
> -{
> - struct cpumask ***masks;
> -
> - if (node >= nr_node_ids || hops >= sched_domains_numa_levels)
> - return ERR_PTR(-EINVAL);
> -
> - masks = rcu_dereference(sched_domains_numa_masks);
> - if (!masks)
> - return ERR_PTR(-EBUSY);
> -
> - return masks[hops][node];
> -}
> -EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
> -
> #endif /* CONFIG_NUMA */
>
> static int __sdt_alloc(const struct cpumask *cpu_map)
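
For context on what's going away: per the kernel-doc above, callers were
expected to take the hop mask under the RCU read lock and finish iterating
before dropping it. A minimal sketch of that calling convention follows
(the example_walk_hops() helper is hypothetical and not from this patch;
real users went through the for_each_numa_hop_mask() wrapper removed
earlier in this series):

	#include <linux/cpumask.h>
	#include <linux/err.h>
	#include <linux/printk.h>
	#include <linux/rcupdate.h>
	#include <linux/topology.h>

	/* Hypothetical caller, for illustration only. */
	static void example_walk_hops(int node)
	{
		const struct cpumask *mask;
		unsigned int hops;
		int cpu;

		rcu_read_lock();
		for (hops = 0; ; hops++) {
			mask = sched_numa_hop_mask(node, hops);
			if (IS_ERR(mask))
				break;	/* -EINVAL past the last hop level,
					 * -EBUSY if the masks aren't set up */

			/* Masks are cumulative: hop N includes every CPU
			 * already present at hop N-1. */
			for_each_cpu(cpu, mask)
				pr_info("cpu %d within %u hops of node %d\n",
					cpu, hops, node);
		}
		rcu_read_unlock();	/* mask must not be used past here */
	}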