Message-ID: <CAKfTPtAHqa6_1y3zbgaEFezd9t+qKALB6Ej5Q4Sdw3b+2d_Bdg@mail.gmail.com>
Date: Fri, 17 Jul 2020 16:19:08 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Patrick Bellasi <patrick.bellasi@...bug.net>
Cc: LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Jonathan Corbet <corbet@....net>,
Dhaval Giani <dhaval.giani@...cle.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Josef Bacik <jbacik@...com>,
Chris Hyser <chris.hyser@...cle.com>,
Parth Shah <parth@...ux.ibm.com>
Subject: Re: [SchedulerWakeupLatency] Per-task vruntime wakeup bonus
On Thu, 16 Jul 2020 at 21:55, Patrick Bellasi
<patrick.bellasi@...bug.net> wrote:
>
>
> Hi Vincent,
>
> On Mon, Jul 13, 2020 at 14:59:51 +0200, Vincent Guittot <vincent.guittot@...aro.org> wrote...
>
> > On Fri, 10 Jul 2020 at 21:59, Patrick Bellasi <patrick.bellasi@...bug.net> wrote:
> >> On Fri, Jul 10, 2020 at 15:21:48 +0200, Vincent Guittot <vincent.guittot@...aro.org> wrote...
> >>
> >> [...]
> >>
> >> >> > C) Existing control paths
> >> >>
> >> >> Assuming:
> >> >>
> >> >> C: CFS task currently running on CPUx
> >> >> W: CFS task waking up on the same CPUx
> >> >>
> >> >> And considering the overall simplified workflow:
> >> >>
> >> >> core::try_to_wake_up()
> >> >>
> >> >> // 1) Select on which CPU W will run
> >> >> core::select_task_rq()
> >> >> fair::select_task_rq_fair()
> >> >>
> >> >> // 2) Enqueue W on the selected CPU
> >> >> core::ttwu_queue()
> >> >> core::ttwu_do_activate()
> >> >> core::activate_task()
> >> >> core::enqueue_task()
> >> >> fair::enqueue_task_fair()
> >> >> fair::enqueue_entity()
> >> >>
> >> >> // 3) Set W's vruntime bonus
> >> >> fair::place_entity()
> >> >> se->vruntime = ...
> >> >>
> >> >> // 4) Check if C can be preempted by W
> >> >> core::ttwu_do_wakeup()
> >> >> core::check_preempt_curr()
> >> >> fair::check_preempt_curr()
> >> >> fair::check_preempt_wakeup(curr, se)
> >> >> fair::wakeup_preempt_entity(curr, se)
> >> >> vdiff = curr.vruntime - se.vruntime
> >> >> return vdiff > wakeup_gran(se)
> >> >>
> >> >> We see that W preempts C iff:
> >> >>
> >> >> vdiff > wakeup_gran(se)
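(For reference, the preemption check in step 4 boils down to roughly the
following, paraphrased from kernel/sched/fair.c of that era rather than
quoted verbatim:)

    static int
    wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
    {
            s64 gran, vdiff = curr->vruntime - se->vruntime;

            if (vdiff <= 0)
                    return -1;      /* se is already ahead of curr */

            gran = wakeup_gran(se);
            if (vdiff > gran)
                    return 1;       /* se is far enough behind: preempt */

            return 0;
    }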
> >> >>
> >> >> Since:
> >> >>
> >> >> enqueue_entity(cfs_rq, se, flags)
> >> >> place_entity(cfs_rq, se, initial=0)
> >> >> thresh = sysctl_sched_latency / (GENTLE_FAIR_SLEEPERS ? 2 : 1)
> >> >> vruntime = cfs_rq->min_vruntime - thresh
> >> >> se->vruntime = max_vruntime(se->vruntime, vruntime)
> >> >>
> >> >> a waking task's W.vruntime can get a "vruntime bonus" up to:
> >> - 1/2 scheduler latency (w/  GENTLE_FAIR_SLEEPERS)
> >> - 1   scheduler latency (w/o GENTLE_FAIR_SLEEPERS)
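(As a concrete illustration, assuming for the sake of the example
sysctl_sched_latency = 6ms: thresh = 3ms with GENTLE_FAIR_SLEEPERS, so a
task that slept long enough is placed at min_vruntime - 3ms, i.e. up to
3ms of vruntime credit; without the feature the credit can reach 6ms.)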
> >> >>
> >> >>
> >> >> > D) Desired behavior
> >> >>
> >> >> The "vruntime bonus" (thresh) computed in place_entity() should have a
> >> >> per-task definition, which defaults to the current implementation.
> >> >>
> >> >> A bigger vruntime bonus can be configured for latency sensitive tasks.
> >> >> A smaller vruntime bonus can be configured for latency tolerant tasks.
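(A minimal sketch of what such a per-task bonus could look like in
place_entity(); the scale_by_latency_nice() helper is purely hypothetical
and only illustrates the idea, it is not actual or proposed kernel code;
the initial-placement path is elided:)

    static void
    place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
    {
            u64 vruntime = cfs_rq->min_vruntime;

            if (!initial) {
                    unsigned long thresh = sysctl_sched_latency;

                    if (sched_feat(GENTLE_FAIR_SLEEPERS))
                            thresh >>= 1;

                    /* hypothetical: scale the sleeper credit by the task's
                     * latency_nice, keeping today's value as the default */
                    thresh = scale_by_latency_nice(se, thresh);

                    vruntime -= thresh;
            }

            /* ensure we never gain time by being placed backwards */
            se->vruntime = max_vruntime(se->vruntime, vruntime);
    }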
> >> >
> >> > I'm not sure that adjusting what you called "vruntime bonus" is the
> >> > right way to provide some latency because it doesn't only provide a
> >> > wakeup latency bonus but also provides a runtime bonus.
> >>
> >> True, however that's what we already do but _just_ in a hard-coded way.
> >>
> >> A task waking up from sleep gets a 1/2 sched_latency bonus, or a full
> >> sched_latency w/o GENTLE_FAIR_SLEEPERS. Point is that not all tasks
> >> are the same: for some this
> >
> > From a nice and fair PoV, it's not a bonus but the opposite. In fact
> > it's limiting how much credit the task will keep from its sleep time.
>
> I agree about 'limiting a credit'; thus, being a _credit_, IMO it's a bonus,
> and the limiting happens only with GENTLE_FAIR_SLEEPERS.
>
> So, in general, tasks _waking up_ get a (limited) credit, i.e. a
> vruntime bonus.
This looks like nitpicking about terms, so let's move on.
>
> From the FAIR PoV it is even more of a bonus since, AFAIU, all the
> machinery is designed to give some vruntime bonus to _non_ CPU bound /
> batch tasks.
>
> That's done to compensate for them being suspended and thus not having a
> chance to consume all their fair CPU share in the previous activation.
>
> > Also keep in mind that this value is clamped by its vruntime, so a task
> > can't always get the full bonus
>
> True, at wakeup we clamp it with the SE's (normalized) vruntime.
>
> But still since we do:
>
> se->vruntime = max(se->vruntime, cfs_rq->min_vruntime-VRUNTIME_BONUS)
> \---- A ----/ \--------------- B ---------------/
>
> The bigger B is, the more likely we are to "penalize" the SE vruntime.
Maybe for your use case, but we should not consider one to be more
likely than the other.
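(To make the clamp concrete with made-up numbers: with min_vruntime =
100ms and a 3ms credit, B = 97ms; a task that blocked at vruntime 99ms
wakes at max(99, 97) = 99ms and gets almost no effective bonus, while one
that blocked at 90ms wakes at 97ms and gets the full 3ms placement behind
min_vruntime.)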
>
>
> >> bonus may not really be required, for others it may be too small.
> >>
> >> Regarding the 'runtime bonus' I think it's kind of unavoidable
> >> if we really want a latency sensitive task to be scheduled
> >> before the others.
> >
> > That's where I disagree.
> > 2 tasks with the same nice priority must get the same running time
> > whatever their latency_nice priority.
>
> This is guaranteed only in the very simple case of CPU bound tasks, and
> the mechanism I describe does not impact those tasks.
>
> For sleeping tasks, if you consider all the approximations we do (gentle
> fair sleepers to begin with) we will never be "precise".
The fact that there are already some uncertainties is not a reason to add more.
> ... not to speak of task migrations or running at different OPPs.
>
> > The only difference should be in selecting which one will run first, or
> > making it easier for one task to preempt the other one, but in the end
> > both should be able to get the same running time
>
> Agree, but I would say that we should aim at getting this result across
> a few sched_latencies.
>
> IOW, I'm not convinced that granting the exact same fair CPU share
> within a sched_latency period is something we already achieve with
> sleeping tasks.
>
> >> > It means that one can impact the running time by playing with
> >> > latency_nice whereas the goal is only to impact the wakeup latency.
> >>
> >> Well, but I'm not sure how much you can really gain considering that
> >> this bonus is given only at wakeup time: the task would have to keep
> >> suspending itself. It would get a better result by just asking for a
> >> lower nice value.
> >>
> >> Now, asking for a reduced nice value is RLIMIT_NICE and CAP_SYS_NICE
> >> protected. The same will be for latency_nice.
> >>
> >> Moreover, considering that by default tasks will get what we already
> >> have as hard-coded, or less of a bonus, I don't see how easy it would
> >> be to abuse.
> >>
> >> To the contrary we can introduce a very useful knob to allow certain
> >> tasks to voluntarily demote themselves and avoid annoying a currently
> >> running task.
> >>
> >> > Instead, it should weight the decision in wakeup_preempt_entity() and
> >> > wakeup_gran()
> >>
> >> In those functions we already take the task prio into consideration
> >> (ref details at the end of this message).
> >>
> >> Lower nice value tasks have more chances to preempt current since they
> >> will have a smaller wakeup_gran, indeed:
> >
> > yes, and this is there to ensure a fair distribution of running time
> > and prevent a task from increasing its vruntime significantly compared
> > to others
>
> Don't follow you here :/
We take into account how fast the se's vruntime will increase during
the next sched period and compare the end result to decide which
one will run first without breaking fair distribution.
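(Paraphrasing the relevant helper from fair.c of that era: the wall-clock
granularity is converted into vruntime units for the waking task, so a
heavier, i.e. lower nice, task gets a smaller virtual granularity and thus
preempts more easily:)

    static unsigned long wakeup_gran(struct sched_entity *se)
    {
            unsigned long gran = sysctl_sched_wakeup_granularity;

            /* roughly gran * NICE_0_LOAD / se->load.weight */
            return calc_delta_fair(gran, se);
    }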
>
> > -1 means that se already got more runtime than current,
> > 0 means that se's vruntime will go above current's vruntime after a
> > runtime duration of sched_min_granularity,
> > and 1 means that se got less runtime than current and its vruntime
> > will still be lower than current's even after a runtime duration of
> > sched_min_granularity.
> >
> > IMHO, latency_nice should impact the decision only for case 0, but not
> > for cases -1 and 1.
> > That being said, we can extend case 0 a bit to include the
> > situation where current's vruntime will become greater than se's
> > vruntime after a runtime duration of sched_min_granularity, like
> > below:
>
> (I hope I reformatted the following as you intended it)
Yes, you have.
>
> >                                             curr->vruntime
> >                            |<-- wakeup_gran(se) -->|<-- wakeup_gran(curr) -->|
> > current: se->vruntime  +1  |           0           |          -1
> > new:     se->vruntime  +1  |           0                                     |  -1
> >
>
> Does that mean penalizing a waking-up task wrt current, such that a
> high latency_nice task waking up has less chance to preempt current?
Yes. If a task doesn't need a short scheduling latency, why should it
preempt the current one, which has, for example, a lower latency_nice?
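(In code terms, the widened "0" band from the diagram above might look
roughly like the sketch below; this is a hypothetical illustration based
on the discussion, not a posted patch:)

    static int
    wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
    {
            s64 vdiff = curr->vruntime - se->vruntime;

            if (vdiff > (s64)wakeup_gran(se))
                    return 1;       /* se far enough behind curr: preempt */

            if (vdiff < -(s64)wakeup_gran(curr))
                    return -1;      /* se far enough ahead of curr: don't */

            return 0;               /* band where latency_nice could decide */
    }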
>
> >> we preempt IFF vdiff(se, current) > wakeup_gran(se)
> >> \----------------/ \-------------/
> >> A B
> >>
> >> While the task's prio affects B, in this proposal latency_nice works on
> >> the A side of the equation above by making it a bit more task specific.
> >>
> >> That said, it's true that both latency_nice and prio will ultimately
> >> play a role on how much CPU bandwidth a task gets.
> >
> > whereas nice is there to split CPU bandwidth between tasks according to
> > their prio, latency_nice should not
>
> Of course that would be something new, for which I see your concerns.
>
> However, I still think that making the task-agnostic and hard-coded
> heuristics we have for A (above) a bit more tunable could give benefits
> for tasks that are very short running (compared to a sched_latency) while
If the task is really short running, it doesn't use all its running
time and should easily preempt other tasks at wakeup without the need
for an extended vruntime bonus or extra CPU bandwidth.
> still never really spoil/abuse the FAIR repartitioning of the CPU
> bandwidth.
>
> I'm thinking, for example, of graphics pipeline tasks which run for <1ms
> every 16ms.
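(Rough, purely illustrative numbers for such a task: running ~1ms every
16ms at nice 0, it advances its own vruntime by only ~1ms per frame while
min_vruntime on a busy CPU advances much faster, so at each wakeup its
vruntime is typically well below min_vruntime - thresh; the max() clamp
then places it exactly thresh, e.g. 3ms, behind min_vruntime, which is
usually enough to pass the wakeup_gran check and preempt.)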
>