Message-ID: <20191121153029.GA105938@google.com>
Date: Thu, 21 Nov 2019 15:30:29 +0000
From: Quentin Perret <qperret@...gle.com>
To: Valentin Schneider <valentin.schneider@....com>
Cc: linux-kernel@...r.kernel.org, peterz@...radead.org,
mingo@...nel.org, vincent.guittot@...aro.org,
dietmar.eggemann@....com, patrick.bellasi@...bug.net,
qais.yousef@....com, morten.rasmussen@....com
Subject: Re: [PATCH 3/3] sched/fair: Consider uclamp for "task fits capacity" checks
On Thursday 21 Nov 2019 at 14:51:06 (+0000), Valentin Schneider wrote:
> On 21/11/2019 13:30, Quentin Perret wrote:
> > On Thursday 21 Nov 2019 at 12:56:39 (+0000), Valentin Schneider wrote:
> >>> @@ -6274,6 +6274,15 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> >>> if (!fits_capacity(util, cpu_cap))
> >>> continue;
> >>>
> >>> + /*
> >>> + * Skip CPUs that don't satisfy uclamp requests. Note
> >>> + * that the above already ensures the CPU has enough
> >>> + * spare capacity for the task; this is only really for
> >>> + * uclamp restrictions.
> >>> + */
> >>> + if (!task_fits_capacity(p, capacity_orig_of(cpu)))
> >>> + continue;
> >>
> >> This is partly redundant with the above, I think. What we really want here
> >> is just
> >>
> >> fits_capacity(uclamp_eff_value(p, UCLAMP_MIN), capacity_orig_of(cpu))
> >>
> >> but this would require some inline #ifdeffery.
> >
> > This suggested change lacks the UCLAMP_MAX part, which is a shame
> > because this is precisely in the EAS path that we should try and
> > down-migrate tasks if they have an appropriate max_clamp. So, your first
> > proposal made sense, IMO.
> >
>
> Hm right, had to let that spin in my head for a while but I think I got it.
>
> I was only really thinking of:
>
> (h960: LITTLE = 462 cap, big = 1024)
> p.uclamp.min = 512 -> skip LITTLEs regardless of the actual util_est
>
> but your point is we also want stuff like:
>
> p.uclamp.max = 300 -> accept LITTLEs regardless of the actual util_est
Right, sorry if my message wasn't clear.
> I'll keep the feec() change as-is and add something like the above in the
> changelog for v2.
>
> > Another option to avoid the redundancy would be to do something along
> > the lines of the totally untested diff below.
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 69a81a5709ff..38cb5fe7ba65 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6372,9 +6372,12 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> > if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> > continue;
> >
> > - /* Skip CPUs that will be overutilized. */
> > util = cpu_util_next(cpu, p, cpu);
> > cpu_cap = capacity_of(cpu);
> > + spare_cap = cpu_cap - util;
> > + util = uclamp_util_with(cpu_rq(cpu), util, p);
> > +
> > + /* Skip CPUs that will be overutilized. */
> > if (!fits_capacity(util, cpu_cap))
> > continue;
> >
> > @@ -6389,7 +6392,6 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> > * Find the CPU with the maximum spare capacity in
> > * the performance domain
> > */
> > - spare_cap = cpu_cap - util;
> > if (spare_cap > max_spare_cap) {
> > max_spare_cap = spare_cap;
> > max_spare_cap_cpu = cpu;
> >
> > Thoughts ?
> >
>
> uclamp_util_with() (or uclamp_rq_util_with() ;)) picks the max between the
> rq-aggregated clamps and the task clamps, which isn't what we want. If the
> task has a low-ish uclamp.max (e.g. the 300 example from above) but the
> rq-wide max-aggregated uclamp.max is ~800, we'd clamp using that 800. It
> makes sense for frequency selection, but not for task placement IMO.
Right, but you could argue that this is in fact correct behaviour.
What we want to know is 'is this CPU big enough to meet the capacity
request if I enqueue p there?'. And the 'capacity request' is the
aggregated rq-wide clamped util, IMO.

If enqueuing 'p' on a given CPU will cause the rq-wide clamped util to
go above the CPU capacity, we want to skip that CPU.

The obvious case is when p's min_clamp is larger than the CPU capacity.
But similarly, if p's max_clamp is going to be ignored because another
task on the same rq has a larger max_clamp, that is relevant
information too -- the resulting capacity request might exceed the CPU
capacity if p's util_avg is large, so we should probably skip that CPU
too, no?
Are we gaining anything if we decide not to align the EAS path and the
sugov path?
Thanks,
Quentin