Message-ID: <CANRm+CyBP8qd6pYcyX_biGBwOcdjdqMqazNjSnq2H6QNE+OsHw@mail.gmail.com>
Date: Mon, 8 Jul 2019 12:05:44 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Frederic Weisbecker <frederic@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Subject: Re: [PATCH v4 1/2] sched/isolation: Prefer housekeeping cpu in local node
Kindly pinging for these two patches. :)
On Fri, 28 Jun 2019 at 16:51, Wanpeng Li <kernellwp@...il.com> wrote:
>
> From: Wanpeng Li <wanpengli@...cent.com>
>
> In real product setups, there are housekeeping cpus in each node. It
> is preferable to do housekeeping on a cpu in the local node, and to
> fall back to the global online cpumask only if no housekeeping cpu
> can be found in the local node.
>
> Reviewed-by: Frederic Weisbecker <frederic@...nel.org>
> Reviewed-by: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Frederic Weisbecker <frederic@...nel.org>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
> Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
> ---
> v3 -> v4:
> * have a static function for sched_numa_find_closest
> * cleanup sched_numa_find_closest comments
> v2 -> v3:
> * add sched_numa_find_closest comments
> v1 -> v2:
> * introduce sched_numa_find_closest
>
> kernel/sched/isolation.c | 12 ++++++++++--
> kernel/sched/sched.h | 8 +++++---
> kernel/sched/topology.c | 20 ++++++++++++++++++++
> 3 files changed, 35 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index 7b9e1e0..191f751 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -16,9 +16,17 @@ static unsigned int housekeeping_flags;
>
> int housekeeping_any_cpu(enum hk_flags flags)
> {
> - if (static_branch_unlikely(&housekeeping_overridden))
> - if (housekeeping_flags & flags)
> + int cpu;
> +
> + if (static_branch_unlikely(&housekeeping_overridden)) {
> + if (housekeeping_flags & flags) {
> + cpu = sched_numa_find_closest(housekeeping_mask, smp_processor_id());
> + if (cpu < nr_cpu_ids)
> + return cpu;
> +
> return cpumask_any_and(housekeeping_mask, cpu_online_mask);
> + }
> + }
> return smp_processor_id();
> }
> EXPORT_SYMBOL_GPL(housekeeping_any_cpu);
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 802b1f3..ec65d90 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1261,16 +1261,18 @@ enum numa_topology_type {
> extern enum numa_topology_type sched_numa_topology_type;
> extern int sched_max_numa_distance;
> extern bool find_numa_distance(int distance);
> -#endif
> -
> -#ifdef CONFIG_NUMA
> extern void sched_init_numa(void);
> extern void sched_domains_numa_masks_set(unsigned int cpu);
> extern void sched_domains_numa_masks_clear(unsigned int cpu);
> +extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
> #else
> static inline void sched_init_numa(void) { }
> static inline void sched_domains_numa_masks_set(unsigned int cpu) { }
> static inline void sched_domains_numa_masks_clear(unsigned int cpu) { }
> +static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
> +{
> + return nr_cpu_ids;
> +}
> #endif
>
> #ifdef CONFIG_NUMA_BALANCING
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index f751ce0..4eea2c9 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1724,6 +1724,26 @@ void sched_domains_numa_masks_clear(unsigned int cpu)
> }
> }
>
> +/*
> + * sched_numa_find_closest() - given the NUMA topology, find the cpu
> + * closest to @cpu from @cpus.
> + * @cpus: cpumask to find a cpu from
> + * @cpu: cpu to be close to
> + *
> + * Returns: cpu, or nr_cpu_ids when nothing found.
> + */
> +int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
> +{
> + int i, j = cpu_to_node(cpu);
> +
> + for (i = 0; i < sched_domains_numa_levels; i++) {
> + cpu = cpumask_any_and(cpus, sched_domains_numa_masks[i][j]);
> + if (cpu < nr_cpu_ids)
> + return cpu;
> + }
> + return nr_cpu_ids;
> +}
> +
> #endif /* CONFIG_NUMA */
>
> static int __sdt_alloc(const struct cpumask *cpu_map)
> --
> 2.7.4
>