Message-ID: <CAKfTPtD+EiB4mnRD0z4wYg6PDn1vLKxu4fxrgfiqsm2G3+RrEQ@mail.gmail.com>
Date: Thu, 30 Mar 2023 19:05:17 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...nel.org, linux-kernel@...r.kernel.org,
juri.lelli@...hat.com, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, corbet@....net, qyousef@...alina.io,
chris.hyser@...cle.com, patrick.bellasi@...bug.net, pjt@...gle.com,
pavel@....cz, qperret@...gle.com, tim.c.chen@...ux.intel.com,
joshdon@...gle.com, timj@....org, kprateek.nayak@....com,
yu.c.chen@...el.com, youssefesmat@...omium.org,
joel@...lfernandes.org, efault@....de
Subject: Re: [PATCH 08/17] sched/fair: Implement an EEVDF like policy
On Thu, 30 Mar 2023 at 10:04, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Wed, Mar 29, 2023 at 04:35:25PM +0200, Vincent Guittot wrote:
>
> > IIUC how it works, Vd = ve + r / wi
> >
> > So for the same weight, the vd will be earlier, but that's no longer
> > always true for different weights
>
> Correct; but for a heavier task the time also goes slower and since it
> needs more time, you want it to go first. But yes, this is weird at
> first glance.
Yeah, I understand that this is needed to bound the lag to a quantum at
most, but it makes the latency prioritization less obvious and not always
aligned with what we want.
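To make the arithmetic in the timelines below concrete, here is a tiny
sketch of the vd = ve + r / w rule as I read it; the struct and the names
are mine, not the patch's, and it only computes the deadlines of the
first pick for the two configurations I use:

#include <stdio.h>

struct task { const char *name; double ve, w, r; };

/* virtual deadline of the current request: vd = ve + r / w */
static double vd(const struct task *t)
{
	return t->ve + t->r / t->w;
}

int main(void)
{
	/* both wake up with the same vruntime (ve = 0) */
	struct task a = { "A", 0.0, 2.0, 4.0 }; /* negative latency nice -> small r */
	struct task b = { "B", 0.0, 2.0, 6.0 }; /* default latency nice */

	printf("same weight: vd(A)=%.2f vd(B)=%.2f\n", vd(&a), vd(&b));

	b.w = 4.0; /* B bumps its weight to get more bandwidth */
	printf("B heavier:   vd(A)=%.2f vd(B)=%.2f\n", vd(&a), vd(&b));

	return 0;
}

With equal weights A gets the earlier deadline (2 vs 3), but once B's
weight doubles, B's deadline (1.5) moves in front of A's (2) even though
A asked for the shorter slice; that is what the second set of timelines
below shows.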
Let's say that you have 2 tasks A and B waking up simultaneously with
the same vruntime; A has a negative latency nice to reflect some latency
constraint and B has the default value. A will run first if they both
have the same prio, which is aligned with their latency nice values, but
B could run first if it increases its nice prio to reflect its need for
more cpu bandwidth. So you can defeat the purpose of the latency nice
although there is no unfairness.

A cares about its latency and sets a negative latency nice to reduce its
requested slice:
A: w=2, r=4q B: w=2, r=6q
A |-------<
B |-----------<
---+---+---+---+---+---+---+-----------
V ^
A runs first because its Vd is earlier
A |-----<
B |-----------<
---+---+---+---+---+---+---+-----------
V ^
A |-----<
B |---------<
---+---+---+---+---+---+---+-----------
V ^
A |---<
B |---------<
---+---+---+---+---+---+---+-----------
V ^
A |---<
B |-------<
---+---+---+---+---+---+---+-----------
V ^
If B increases its nice because it wants more bandwidth but still
doesn't care about latency:
A: w=2, r=4q B: w=4, r=6q
A |-----------<
B |---------<
---+-----+-----+-----+-----+-----+---+-----------
V ^
B runs first even though A's latency nice is lower
A |-----------<
B |------<
---+-----+-----+-----+-----+-----+---+-----------
V ^
A |--------<
B |------<
---+-----+-----+-----+-----+-----+---+-----------
V ^
A |--------<
B |----<
---+-----+-----+-----+-----+-----+---+-----------
V ^
A |-----<
B |----<
---+-----+-----+-----+-----+-----+---+-----------
V ^
A |-----<
B |--<
---+-----+-----+-----+-----+-----+---+-----------
V ^
A |-----<
B |--<
---+-----+-----+-----+-----+-----+---+-----------
V ^
>
> Let us consider a 3 task scenario, where one task (A) is double the weight
> of the other two (B,C), and let them run one quantum (q) at a time.
>
> Each step will see V advance q/4.
>
> A: w=2, r=4q B: w=1, r=4q C: w=1, r=4q
>
> 1) A runs -- earliest deadline
>
> A |-------<
> B |---------------<
> C |---------------<
> ---+---+---+---+---+---+---+-----------
> V ^
>
> 2) B runs (tie break with C) -- A is ineligible due to v_a > V
>
> A |-----<
> B |---------------<
> C |---------------<
> ---+---+---+---+---+---+---+-----------
> V ^
>
> 3) A runs -- earliest deadline
>
> A |-----<
> B |-----------<
> C |---------------<
> ---+---+---+---+---+---+---+-----------
> V ^
>
> 4) C runs -- only eligible task
>
> A |---<
> B |-----------<
> C |---------------<
> ---+---+---+---+---+---+---+-----------
> V ^
>
> 5) similar to 1)
>
> A |---<
> B |-----------<
> C |-----------<
> ---+---+---+---+---+---+---+-----------
> V ^
>
> And we see that we get a very nice ABAC interleave, with the only other
> possible schedule being ACAB.
>
> By virtue of the heavier task getting a shorter virtual deadline it gets
> nicely interleaved with the other tasks and you get a very consistent
> schedule with very little choice.
>
> Like already said, step 2) is the only place we had a choice, and if we
> were to have given either B or C a shorter request (IOW latency-nice)
> that choice would have been fully determined.
>
> So increasing w gets you more time (and the shorter deadline causes the
> above interleaving), while for the same w, reducing r gets you picked
> earlier.
>
> Perhaps another way to look at it is that since heavier tasks run more
> (often) you've got to compete against them more often for latency.
>
>
> Does that help?
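
FWIW, simulating the pick rule as I read it (eligible means ve <= V,
pick the earliest vd = ve + r / w, run one quantum at a time; the code
and names are mine, not the patch's) does reproduce the ABAC interleave
of your example:

#include <stdio.h>

struct task { const char *name; double ve, w, r; };

/* Each step: pick the eligible task (ve <= V) with the earliest virtual
 * deadline ve + r / w, charge it one quantum and advance V by q / sum(w).
 * Ties go to the first task in the array. */
static void simulate(struct task *t, int n, int steps)
{
	double wsum = 0.0, V = 0.0;
	int i, s;

	for (i = 0; i < n; i++)
		wsum += t[i].w;

	for (s = 0; s < steps; s++) {
		int pick = -1;

		for (i = 0; i < n; i++) {
			if (t[i].ve > V)
				continue; /* not eligible */
			if (pick < 0 ||
			    t[i].ve + t[i].r / t[i].w <
			    t[pick].ve + t[pick].r / t[pick].w)
				pick = i;
		}
		printf("%s", t[pick].name);
		t[pick].ve += 1.0 / t[pick].w; /* served one quantum */
		t[pick].r -= 1.0;              /* remaining request shrinks, vd stays put */
		V += 1.0 / wsum;
	}
	printf("\n");
}

int main(void)
{
	struct task abc[] = {
		{ "A", 0.0, 2.0, 4.0 },
		{ "B", 0.0, 1.0, 4.0 },
		{ "C", 0.0, 1.0, 4.0 },
	};

	simulate(abc, 3, 4); /* prints ABAC */
	return 0;
}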