Message-ID: <20240704235652.n2wtpwck43umh4dq@airbuntu>
Date: Fri, 5 Jul 2024 00:56:52 +0100
From: Qais Yousef <qyousef@...alina.io>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Xuewen Yan <xuewen.yan@...soc.com>, dietmar.eggemann@....com,
mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vschneid@...hat.com, christian.loehle@....com,
vincent.donnefort@....com, ke.wang@...soc.com, di.shen@...soc.com,
xuewen.yan94@...il.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V2 2/2] sched/fair: Use actual_cpu_capacity everywhere in
util_fits_cpu()
On 07/03/24 16:54, Vincent Guittot wrote:
> On Wed, 3 Jul 2024 at 13:54, Qais Yousef <qyousef@...alina.io> wrote:
> >
> > On 07/02/24 15:25, Vincent Guittot wrote:
> >
> > > > > *
> > > > > * Only exception is for HW or cpufreq pressure since it has a direct impact
> > > > > * on available OPP of the system.
> > > > > @@ -5011,7 +5011,7 @@ static inline int util_fits_cpu(unsigned long util,
> > > > > * For uclamp_max, we can tolerate a drop in performance level as the
> > > > > * goal is to cap the task. So it's okay if it's getting less.
> > > > > */
> > > > > - capacity_orig = arch_scale_cpu_capacity(cpu);
> > > > > + capacity_actual = get_actual_cpu_capacity(cpu);
> > > > >
> > > > > /*
> > > > > * We want to force a task to fit a cpu as implied by uclamp_max.
> > > > > @@ -5039,7 +5039,7 @@ static inline int util_fits_cpu(unsigned long util,
> > > > > * uclamp_max request.
> > > > > *
> > > > > * which is what we're enforcing here. A task always fits if
> > > > > - * uclamp_max <= capacity_orig. But when uclamp_max > capacity_orig,
> > > > > + * uclamp_max <= capacity_actual. But when uclamp_max > capacity_actual,
> > > > > * the normal upmigration rules should withhold still.
> > > > > *
> > > > > * Only exception is when we are on max capacity, then we need to be
> > > > > @@ -5050,8 +5050,8 @@ static inline int util_fits_cpu(unsigned long util,
> > > > > * 2. The system is being saturated when we're operating near
> > > > > * max capacity, it doesn't make sense to block overutilized.
> > > > > */
> > > > > - uclamp_max_fits = (capacity_orig == SCHED_CAPACITY_SCALE) && (uclamp_max == SCHED_CAPACITY_SCALE);
> > > > > - uclamp_max_fits = !uclamp_max_fits && (uclamp_max <= capacity_orig);
> > > > > + uclamp_max_fits = (capacity_actual == SCHED_CAPACITY_SCALE) && (uclamp_max == SCHED_CAPACITY_SCALE);
> > > >
> > > > We should use capacity_orig here. We are checking if the CPU is the max
> > > > capacity CPU.
> > >
> > > I was also wondering what would be the best choice there. If we
> > > consider that we have only one performance domain with all max
> > > capacity cpus then I agree that we should keep capacity_orig as we
> > > can't find a better cpu that would fit. But is it always true that all
> > > max-capacity cpus are tied to the same perf domain?
> >
> > Hmm, I might not be thinking straight today. But the purpose of this check
> > is to ensure overutilized can trigger for the common case where a task will
> > always fit the max capacity cpu (whether it's on a single pd or multiple
> > ones). For that corner case, fits_capacity() should be the only fitness
> > check; otherwise overutilized will never trigger by default.
>
> Ok, so I mixed up several use cases, but in fact both behave the same ...
>
> If capacity_actual != SCHED_CAPACITY_SCALE and
> uclamp_max == SCHED_CAPACITY_SCALE, then
> uclamp_max_fits = (capacity_actual == SCHED_CAPACITY_SCALE) && (uclamp_max == SCHED_CAPACITY_SCALE)
> is false, and
> uclamp_max_fits = !uclamp_max_fits && (uclamp_max <= capacity_actual);
> is also false, because (uclamp_max <= capacity_actual) is always false.
>
> If capacity_actual == SCHED_CAPACITY_SCALE and
> uclamp_max == SCHED_CAPACITY_SCALE, then we get the same result as with
> capacity_orig.
Right. The condition is becoming less readable, but you're right that it
doesn't change the functionality.
Xuewen, could you please add a note to the commit message to remind us in the
future that we thought about this and it is fine?
Thanks!
--
Qais Yousef