Message-ID: <CAKfTPtAk2A8zPgOfpbN0s4LZv+d7ABB9=5tAEMCbVrf263XtjA@mail.gmail.com>
Date: Tue, 21 Feb 2023 15:21:54 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...hat.com, juri.lelli@...hat.com, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vschneid@...hat.com,
linux-kernel@...r.kernel.org, parth@...ux.ibm.com,
cgroups@...r.kernel.org, qyousef@...alina.io,
chris.hyser@...cle.com, patrick.bellasi@...bug.net,
David.Laight@...lab.com, pjt@...gle.com, pavel@....cz,
tj@...nel.org, qperret@...gle.com, tim.c.chen@...ux.intel.com,
joshdon@...gle.com, timj@....org, kprateek.nayak@....com,
yu.c.chen@...el.com, youssefesmat@...omium.org,
joel@...lfernandes.org
Subject: Re: [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup
On Tue, 21 Feb 2023 at 14:05, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Fri, Jan 13, 2023 at 03:12:30PM +0100, Vincent Guittot wrote:
> > @@ -6155,6 +6159,35 @@ static int sched_idle_cpu(int cpu)
> > }
> > #endif
> >
> > +static void set_next_buddy(struct sched_entity *se);
> > +
> > +static void check_preempt_from_others(struct cfs_rq *cfs, struct sched_entity *se)
> > +{
> > + struct sched_entity *next;
> > +
> > + if (se->latency_offset >= 0)
> > + return;
> > +
> > + if (cfs->nr_running <= 1)
> > + return;
> > + /*
> > + * When waking from another class, we don't run the preemption check
> > + * at wakeup and the next buddy doesn't get set as a candidate for
> > + * being picked in priority.
> > + * In case of a simultaneous wakeup while current is from another
> > + * class, latency sensitive tasks would then lose the opportunity to
> > + * preempt non sensitive tasks which woke up at the same time.
> > + */
> > +
> > + if (cfs->next)
> > + next = cfs->next;
> > + else
> > + next = __pick_first_entity(cfs);
> > +
> > + if (next && wakeup_preempt_entity(next, se) == 1)
> > + set_next_buddy(se);
> > +}
> > +
> > /*
> > * The enqueue_task method is called before nr_running is
> > * increased. Here we update the fair scheduling stats and
> > @@ -6241,14 +6274,15 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> > if (!task_new)
> > update_overutilized_status(rq);
> >
> > + if (rq->curr->sched_class != &fair_sched_class)
> > + check_preempt_from_others(cfs_rq_of(&p->se), &p->se);
> > +
> > enqueue_throttle:
> > assert_list_leaf_cfs_rq(rq);
> >
> > hrtick_update(rq);
> > }
>
> Hmm.. This sets a next selection when the task gets enqueued while not
> running a fair task -- and loses a wakeup preemption opportunity.
>
> Should we perhaps also do this for latency_nice == 0? In any case I
> think this can be moved to its own patch to avoid doing too much in the
> one patch. It seems fairly self-contained.
This function is then removed by patch 9, as the additional rb tree
fixes all cases.
>
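For illustration, a minimal sketch of the relaxation suggested above,
not part of the series: assuming latency_nice == 0 maps to
latency_offset == 0 (as earlier patches in the series suggest), the
early return in check_preempt_from_others() could be relaxed so that
latency_nice == 0 tasks also go through the next-buddy check:

	/*
	 * Only skip latency-tolerant tasks (positive offset) so that
	 * latency_nice == 0 (offset == 0) is also considered.
	 */
	if (se->latency_offset > 0)
		return;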