Message-ID: <Y+XqfXVHyqrv1/Ae@chenyu5-mobl1>
Date: Fri, 10 Feb 2023 14:55:57 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: Tim Chen <tim.c.chen@...ux.intel.com>
CC: "Chen, Tim C" <tim.c.chen@...el.com>,
Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
"Neri, Ricardo" <ricardo.neri@...el.com>,
"Shankar, Ravi V" <ravi.v.shankar@...el.com>,
Ben Segall <bsegall@...gle.com>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
"Brown, Len" <len.brown@...el.com>, Mel Gorman <mgorman@...e.de>,
"Wysocki, Rafael J" <rafael.j.wysocki@...el.com>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
Steven Rostedt <rostedt@...dmis.org>,
Valentin Schneider <vschneid@...hat.com>,
Ionela Voinescu <ionela.voinescu@....com>,
"x86@...nel.org" <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 06/10] sched/fair: Use the prefer_sibling flag of the
current sched domain
On 2023-02-09 at 15:05:03 -0800, Tim Chen wrote:
> On Thu, 2023-02-09 at 20:00 +0000, Chen, Tim C wrote:
> > > > static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sds)
> > > > {
> > > > -	struct sched_domain *child = env->sd->child;
> > > >  	struct sched_group *sg = env->sd->groups;
> > > >  	struct sg_lb_stats *local = &sds->local_stat;
> > > >  	struct sg_lb_stats tmp_sgs;
> > > > @@ -10045,9 +10044,11 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
> > > >  		sg = sg->next;
> > > >  	} while (sg != env->sd->groups);
> > > > 
> > > > -	/* Tag domain that child domain prefers tasks go to siblings first */
> > > > -	sds->prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
> > > > -
> > > > +	/*
> > > > +	 * Tag domain that @env::sd prefers to spread excess tasks among
> > > > +	 * sibling sched groups.
> > > > +	 */
> > > > +	sds->prefer_sibling = env->sd->flags & SD_PREFER_SIBLING;
> > > > 
> > > >
> > > This does help fix the issue that a non-SMT core fails to pull tasks
> > > from busy SMT cores.
> > > And it also semantically changes the definition of prefer_sibling.
> > > Do we also need to change this:
> > > 	if ((sd->flags & SD_ASYM_CPUCAPACITY) && sd->child)
> > > 		sd->child->flags &= ~SD_PREFER_SIBLING;
> > > might be:
> > > 	if (sd->flags & SD_ASYM_CPUCAPACITY)
> > > 		sd->flags &= ~SD_PREFER_SIBLING;
> > >
> >
> > Yu,
> >
> > I think you are talking about the code in sd_init() where
> > SD_PREFER_SIBLING is first set to "ON" and then updated depending on
> > SD_ASYM_CPUCAPACITY. The intention of the code is that if there are
> > CPUs with differing capacities in the sched domain, we do not want to
> > spread tasks among the child groups of that domain. So the flag is
> > turned off at the child level, not the parent level. But with your
> > change above, the parent's flag is turned off, leaving the child-level
> > flag on. This moves the level where spreading happens
> > (SD_PREFER_SIBLING on) up one level, which is undesired (see table
> > below).
> >
Yes, it moves the flag one level up. And if I understand correctly, with Ricardo's
patch applied, we have changed the original meaning of SD_PREFER_SIBLING:
Original: tasks in this sched domain want to be migrated to another sched domain.
After the init change: tasks in the sched groups under this sched domain want to
be migrated to a sibling group.
> >
> Sorry, a bad mail client messed up the table format. Updated below:
>
> SD Level       SD_ASYM_CPUCAPACITY   SD_PREFER_SIBLING after init
>                                      original code   proposed
> root           ON                    ON              OFF (note: SD_PREFER_SIBLING unused at this level)
SD_PREFER_SIBLING is honored at the root level after the proposed init change.
> first level    ON                    OFF             OFF
Before the proposed init change, tasks in the first-level sd do not want
to be spread to a sibling sd. After the proposed change, tasks in all
sched groups under the root sd do not want to be spread to a sibling
sched group (i.e., a first-level sd).
thanks,
Chenyu
> second level   OFF                   OFF             ON
> third level    OFF                   ON              ON
>
> Tim