Message-ID: <CAKfTPtBWUz3zvOXx-s7_xsyPZU9WDYXz-6KpiC6hG9TVhFVXdw@mail.gmail.com>
Date: Mon, 28 Mar 2022 14:51:29 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
linux-kernel@...r.kernel.org, parth@...ux.ibm.com,
qais.yousef@....com, chris.hyser@...cle.com,
pkondeti@...eaurora.org, Valentin.Schneider@....com,
patrick.bellasi@...bug.net, David.Laight@...lab.com,
pjt@...gle.com, pavel@....cz, tj@...nel.org,
dhaval.giani@...cle.com, qperret@...gle.com,
tim.c.chen@...ux.intel.com
Subject: Re: [RFC 5/6] sched/fair: Take into account latency nice at wakeup
On Mon, 28 Mar 2022 at 11:24, Dietmar Eggemann <dietmar.eggemann@....com> wrote:
>
> On 11/03/2022 17:14, Vincent Guittot wrote:
>
> [...]
>
> > @@ -4412,7 +4417,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
> > p->prio = p->normal_prio = p->static_prio;
> > set_load_weight(p, false);
> >
> > - p->latency_nice = DEFAULT_LATENCY_NICE;
> > + p->latency_prio = NICE_TO_LATENCY(0);
> > /*
> > * We don't need the reset flag anymore after the fork. It has
> > * fulfilled its duty:
> > @@ -4420,6 +4425,9 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
> > p->sched_reset_on_fork = 0;
> > }
> >
> > + /* Once latency_prio is set, update the latency weight */
> > + set_latency_weight(p);
>
> I thought we only have to do this in the `sched_reset_on_fork` case?
> Like we do with set_load_weight(). Can we not rely on dup_task_struct()
> in the other case?
>
> [...]
>
> > @@ -5648,6 +5677,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> > if (!task_new)
> > update_overutilized_status(rq);
> >
> > + if (rq->curr == rq->idle)
> > + check_preempt_from_idle(cfs_rq_of(&p->se), &p->se);
>
> This is done here (1) because check_preempt_wakeup() (2) is only called
> if p and rq->curr have CFS sched class?
Yes
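check_preempt_curr() only invokes the class hook when p and rq->curr
are in the same sched class, so a fair task waking up while the idle
task is current never reaches check_preempt_wakeup() and its latency
handling; the call chain you quote below shows exactly that gate,
which is why enqueue_task_fair() has to do the check itself. A
minimal userspace sketch of that gate (illustrative only; the struct
and strings are made up, this is not the kernel code):

#include <stdio.h>
#include <string.h>

struct task { const char *sched_class; };

static void check_preempt_curr_sketch(struct task *curr, struct task *p)
{
	if (!strcmp(p->sched_class, curr->sched_class))
		printf("call the %s class check_preempt hook\n", p->sched_class);
	else
		printf("no fair-class check; only generic cross-class handling\n");
}

int main(void)
{
	struct task idle = { .sched_class = "idle" };
	struct task fair = { .sched_class = "fair" };

	/* fair task wakes while idle is current: the fair hook is skipped */
	check_preempt_curr_sketch(&idle, &fair);
	return 0;
}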
>
>
> ttwu_do_activate()
> activate_task()
> enqueue_task <-- (1)
> ttwu_do_wakeup()
> check_preempt_curr()
> if (p->sched_class == rq->curr->sched_class)
> rq->curr->sched_class->check_preempt_curr() <-- (2)
>
> [...]
>
> > @@ -7008,6 +7059,10 @@ static int
> > wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
> > {
> > s64 gran, vdiff = curr->vruntime - se->vruntime;
> > + int latency_weight = se->latency_weight - curr->latency_weight;
> > +
> > + latency_weight = min(latency_weight, se->latency_weight);
>
> Why the min out of latency_weight_diff(se, curr) and se->latency_weight
> here?
When there are 2 low-latency tasks (weight 1024), there is no reason
to favor the waking task, so we take the diff (0 in this case).
When there are 2 latency-tolerant tasks (weight -1024), we want to
make sure not to preempt the current task, so we take the weight
(-1024) instead of the diff.
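To make the two cases concrete, here is a tiny standalone sketch of
the arithmetic (illustrative values only; latency_bias() is a made-up
helper, the real code is the wakeup_preempt_entity() hunk quoted
above):

#include <stdio.h>

static int min(int a, int b) { return a < b ? a : b; }

static int latency_bias(int se_weight, int curr_weight)
{
	int latency_weight = se_weight - curr_weight;

	/* never credit the waking task with more than its own weight */
	return min(latency_weight, se_weight);
}

int main(void)
{
	/* two low-latency tasks: diff is 0, no extra boost for the waker */
	printf("%d\n", latency_bias(1024, 1024));	/* 0 */
	/* two latency-tolerant tasks: keep -1024 so curr is not preempted */
	printf("%d\n", latency_bias(-1024, -1024));	/* -1024 */
	return 0;
}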
>
> [...]