Message-ID: <20210511165539.GU2633526@linux.vnet.ibm.com>
Date: Tue, 11 May 2021 22:25:39 +0530
From: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To: Valentin Schneider <valentin.schneider@....com>
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Rik van Riel <riel@...riel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Gautham R Shenoy <ego@...ux.vnet.ibm.com>,
Parth Shah <parth@...ux.ibm.com>,
Aubrey Li <aubrey.li@...el.com>
Subject: Re: [PATCH v2 6/8] sched/idle: Move busy_cpu accounting to idle
callback
* Valentin Schneider <valentin.schneider@....com> [2021-05-11 12:51:41]:
> On 06/05/21 22:15, Srikar Dronamraju wrote:
> > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> > index 8db40c8a6ad0..00e4669bb241 100644
> > --- a/kernel/sched/topology.c
> > +++ b/kernel/sched/topology.c
> > @@ -647,6 +647,7 @@ DEFINE_PER_CPU(int, sd_llc_id);
> > #ifdef CONFIG_SCHED_SMT
> > DEFINE_PER_CPU(int, smt_id);
> > #endif
> > +DEFINE_PER_CPU(int, is_idle);
>
> This + patch 8 immediately reminds me of Aubrey's patch:
>
> http://lore.kernel.org/r/1615872606-56087-1-git-send-email-aubrey.li@intel.com
>
> last I looked it seemed OK, even the test bot seems happy. Aubrey, did you
> have any more work to do on that one (other than rebasing)?
>
The above patch is also aimed at similar problems.
However, I feel this patchset has a few differences:
- We use nr_busy_cpus in this patchset in wake_affine_idler_llc() to
  differentiate between the two LLCs and choose the LLC that is more lightly
  loaded (see the first sketch below).
- Except for the per-CPU is_idle, this patchset gets the job done with the
  existing infrastructure. (And we could move is_idle into the rq struct too.)
- Mel had reservations about the per-LLC idle-core tracking; I would expect
  a per-LLC idle mask to mean even more updates and cacheline dirtying.
  Every time a CPU enters or leaves idle, the mask would be dirtied. That
  said, this new per-LLC CPU mask would probably be read fewer times than
  idle-cores is with this patch series.
- In the said implementation, a CPU's idleness is evaluated at every check,
  which may not be necessary if it is handled in the idle entry/exit
  callbacks (see the second sketch below).
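
To illustrate the first point, here is a rough sketch (not the actual patch;
idler_llc() is an illustrative name) of how the nr_busy_cpus count already
kept in the existing sched_domain_shared infrastructure can be used to pick
the idler of the two candidate LLCs:

/*
 * Illustrative sketch only: compare the waker's LLC with the wakee's
 * previous LLC and prefer the one that currently has fewer busy CPUs.
 */
static int idler_llc(int this_cpu, int prev_cpu)
{
	struct sched_domain_shared *this_sds, *prev_sds;
	int this_busy, prev_busy, cpu = nr_cpumask_bits;

	rcu_read_lock();
	this_sds = rcu_dereference(per_cpu(sd_llc_shared, this_cpu));
	prev_sds = rcu_dereference(per_cpu(sd_llc_shared, prev_cpu));
	if (this_sds && prev_sds) {
		this_busy = atomic_read(&this_sds->nr_busy_cpus);
		prev_busy = atomic_read(&prev_sds->nr_busy_cpus);

		/* Prefer the LLC with fewer busy CPUs. */
		cpu = (this_busy <= prev_busy) ? this_cpu : prev_cpu;
	}
	rcu_read_unlock();

	return cpu;
}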
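
And for the last point, a minimal sketch of accounting done only on idle
transitions; update_idle_state() is a made-up name standing in for the idle
entry/exit callbacks, so wakeup-time code can simply read per_cpu(is_idle,
cpu) instead of recomputing idleness at every check:

/*
 * Minimal sketch: record idleness once per idle entry/exit and keep the
 * LLC-wide busy count in sync, instead of re-evaluating idleness on
 * every wakeup-time check.
 */
static void update_idle_state(int cpu, bool entering_idle)
{
	struct sched_domain_shared *sds;

	per_cpu(is_idle, cpu) = entering_idle;

	rcu_read_lock();
	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
	if (sds) {
		if (entering_idle)
			atomic_dec(&sds->nr_busy_cpus);
		else
			atomic_inc(&sds->nr_busy_cpus);
	}
	rcu_read_unlock();
}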
> > DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
> > DEFINE_PER_CPU(struct sched_domain __rcu *, sd_numa);
> > DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
> > @@ -673,6 +674,7 @@ static void update_top_cache_domain(int cpu)
> > #ifdef CONFIG_SCHED_SMT
> > per_cpu(smt_id, cpu) = cpumask_first(cpu_smt_mask(cpu));
> > #endif
> > + per_cpu(is_idle, cpu) = 1;
> > rcu_assign_pointer(per_cpu(sd_llc_shared, cpu), sds);
> >
> > sd = lowest_flag_domain(cpu, SD_NUMA);
> > --
> > 2.18.2
--
Thanks and Regards
Srikar Dronamraju