Message-ID: <05472b4ed10c694bce1a2b6dd4a0ef13ea337db3.camel@linux.intel.com>
Date: Thu, 09 Jun 2022 15:28:38 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Yicong Yang <yangyicong@...ilicon.com>, peterz@...radead.org,
mingo@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, gautham.shenoy@....com,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org
Cc: dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
bristot@...hat.com, prime.zeng@...wei.com,
jonathan.cameron@...wei.com, ego@...ux.vnet.ibm.com,
srikar@...ux.vnet.ibm.com, linuxarm@...wei.com, 21cnbao@...il.com,
guodong.xu@...aro.org, hesham.almatary@...wei.com,
john.garry@...wei.com, shenyang39@...wei.com
Subject: Re: [PATCH v4 1/2] sched: Add per_cpu cluster domain info and
cpus_share_resources API
On Thu, 2022-06-09 at 20:06 +0800, Yicong Yang wrote:
>
>
> +/*
> + * Whether CPUs share cache resources, which means LLC on non-cluster
> + * machines and LLC tag or L2 on machines with clusters.
> + */
> +bool cpus_share_resources(int this_cpu, int that_cpu)
Suggest cpus_share_lowest_cache() to be a bit more informative; an untested
sketch of what I have in mind follows below the quoted function.
> +{
> + if (this_cpu == that_cpu)
> + return true;
> +
> + return per_cpu(sd_share_id, this_cpu) == per_cpu(sd_share_id, that_cpu);
> +}
> +
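Something like the below, completely untested and only changing the name and
the comment wording (the body is exactly as in your patch, and the name then
mirrors the existing cpus_share_cache()). The per-cpu variable name is a
separate nit further down.

/*
 * Whether CPUs share the lowest common cache, i.e. the LLC on
 * non-cluster machines and the LLC tag or L2 on machines with
 * clusters.
 */
bool cpus_share_lowest_cache(int this_cpu, int that_cpu)
{
	if (this_cpu == that_cpu)
		return true;

	return per_cpu(sd_share_id, this_cpu) == per_cpu(sd_share_id, that_cpu);
}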
> static inline bool ttwu_queue_cond(int cpu, int wake_flags)
> {
> /*
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 01259611beb9..b9bcfcf8d14d 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1753,7 +1753,9 @@ static inline struct sched_domain *lowest_flag_domain(int cpu, int flag)
> DECLARE_PER_CPU(struct sched_domain __rcu *, sd_llc);
> DECLARE_PER_CPU(int, sd_llc_size);
> DECLARE_PER_CPU(int, sd_llc_id);
> +DECLARE_PER_CPU(int, sd_share_id);
> DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
> +DECLARE_PER_CPU(struct sched_domain __rcu *, sd_cluster);
> DECLARE_PER_CPU(struct sched_domain __rcu *, sd_numa);
> DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
> DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 05b6c2ad90b9..0595827d481d 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -664,6 +664,8 @@ static void destroy_sched_domains(struct sched_domain *sd)
> DEFINE_PER_CPU(struct sched_domain __rcu *, sd_llc);
> DEFINE_PER_CPU(int, sd_llc_size);
> DEFINE_PER_CPU(int, sd_llc_id);
> +DEFINE_PER_CPU(int, sd_share_id);
A minor nit about the name "sd_share_id": it is not quite obvious what it
refers to. Maybe something like sd_lowest_cache_id, to denote that it is
the id of the lowest shared cache domain between CPUs.
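With that rename, the declarations and the comparison would read something
like this (untested, only showing the naming; everything else stays as in
your patch):

DECLARE_PER_CPU(int, sd_lowest_cache_id);	/* kernel/sched/sched.h */
DEFINE_PER_CPU(int, sd_lowest_cache_id);	/* kernel/sched/topology.c */

	return per_cpu(sd_lowest_cache_id, this_cpu) ==
	       per_cpu(sd_lowest_cache_id, that_cpu);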
Otherwise the patch looks good to me. You can add
Reviewed-by: Tim Chen <tim.c.chen@...ux.intel.com>
Tim