Message-ID: <Y1J/JzKNOx4lQfG9@chenyu5-mobl1>
Date: Fri, 21 Oct 2022 19:14:47 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: Abel Wu <wuyun.abel@...edance.com>
CC: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>, "Mel Gorman" <mgorman@...e.de>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Valentin Schneider <valentin.schneider@....com>,
Josh Don <joshdon@...gle.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
K Prateek Nayak <kprateek.nayak@....com>,
"Gautham R . Shenoy" <gautham.shenoy@....com>,
Aubrey Li <aubrey.li@...el.com>,
Qais Yousef <qais.yousef@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Rik van Riel <riel@...riel.com>,
Yicong Yang <yangyicong@...wei.com>,
Barry Song <21cnbao@...il.com>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v6 3/4] sched/fair: Introduce SIS_CORE
On 2022-10-21 at 17:35:06 +0800, Abel Wu wrote:
> On 10/21/22 12:34 PM, Chen Yu wrote:
> > On 2022-10-21 at 12:30:56 +0800, Abel Wu wrote:
> > > Hi Chen, thanks for your reviewing!
> > >
> > > On 10/21/22 12:03 PM, Chen Yu wrote:
> > > > On 2022-10-19 at 20:28:58 +0800, Abel Wu wrote:
> > > > [cut]
> > > > > A major concern is the accuracy of the idle cpumask. A cpu present
> > > > > in the mask might not be idle any more; such a cpu is called a false
> > > > > positive. These cpus will negate much of the benefit this feature
> > > > > brings. The strategy against false positives will be introduced in
> > > > > the next patch.
> > > > >
> > > > I was thinking that, since patch [3/4] needs [4/4] to fix the false
> > > > positives, maybe SIS_CORE could be disabled by default in 3/4 and only
> > > > enabled in 4/4? That would facilitate git bisect in case of any
> > > > regression.
> > >
> > > Agreed. Will fix in next version.
> > >
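For reference, a minimal sketch of what keeping it off by default could look
like, assuming the usual SCHED_FEAT() machinery in kernel/sched/features.h
(exact placement is of course up to you):

	/*
	 * Sketch only: keep SIS_CORE disabled in patch 3/4, then flip it
	 * to true in 4/4 once the false-positive handling is in place.
	 */
	SCHED_FEAT(SIS_CORE, false)

That way a bisect landing between 3/4 and 4/4 runs with the feature off and
should not hit the false-positive issue.
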
> > > > [cut]
> > > > > + * To honor the rule of CORE-granule update, set this cpu in the LLC idle
> > > > > + * cpumask only if no cpu of this core has shown up in the cpumask yet.
> > > > > + */
> > > > > +static void update_idle_cpu(int cpu)
> > > > > +{
> > > > > + struct sched_domain_shared *sds;
> > > > > +
> > > > > + if (!sched_feat(SIS_CORE))
> > > > > + return;
> > > > > +
> > > > > + sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
> > > > > + if (sds) {
> > > > > + struct cpumask *icpus = to_cpumask(sds->icpus);
> > > > > +
> > > > > + /*
> > > > > + * This is racy against the clearing in select_idle_cpu(),
> > > > > + * and can cause idle cpus to miss the chance of being set
> > > > > + * in the idle cpumask, leaving them temporarily out of
> > > > > + * reach of the SIS domain scan. But this should be rare,
> > > > > + * and we still have ILB to kick them back to work.
> > > > > + */
> > > > > + if (!cpumask_intersects(cpu_smt_mask(cpu), icpus))
> > > > > + cpumask_set_cpu(cpu, icpus);
> > > > Maybe I missed something: here we only set one CPU in icpus, but
> > > > when we reach update_idle_cpu(), all SMT siblings of 'cpu' are idle.
> > > > Is this the intended behavior of 'CORE granule update'?
> > >
> > > __update_idle_core() is called by every cpu that goes idle, to update
> > > has_idle_core if necessary, and update_idle_cpu() is called before
> > > that check.
> > >
> > I see.
> >
> > Since __update_idle_core() already checks whether all SMT siblings of
> > 'cpu' are idle, can that information also be propagated to icpus?
>
> I think this would simply fall back to the original per-cpu proposal and
> lose the opportunity to spread tasks across different cores.
OK, makes sense.
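
Just to double check my understanding of the call flow, here is roughly how
I picture __update_idle_core() with this series applied (a sketch based on
the current mainline function; helper names and signatures may differ from
what the series actually carries):

	void __update_idle_core(struct rq *rq)
	{
		int core = cpu_of(rq);
		int cpu;

		rcu_read_lock();

		/* Patch 3/4: CORE-granule update of the LLC idle cpumask. */
		update_idle_cpu(core);

		/* Already marked as having an idle core? Nothing to do. */
		if (test_idle_cores(core, true))
			goto unlock;

		/* has_idle_core is set only if every SMT sibling is idle. */
		for_each_cpu(cpu, cpu_smt_mask(core)) {
			if (cpu == core)
				continue;

			if (!available_idle_cpu(cpu))
				goto unlock;
		}

		set_idle_cores(core, 1);
	unlock:
		rcu_read_unlock();
	}

So update_idle_cpu() runs on every idle entry and picks at most one cpu per
core for icpus, while the full-sibling walk below it is what drives
has_idle_core.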
thanks,
Chenyu