Message-ID: <20230602150633.GJ620383@hirez.programming.kicks-ass.net>
Date: Fri, 2 Jun 2023 17:06:33 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: mingo@...nel.org, linux-kernel@...r.kernel.org,
juri.lelli@...hat.com, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, corbet@....net, qyousef@...alina.io,
chris.hyser@...cle.com, patrick.bellasi@...bug.net, pjt@...gle.com,
pavel@....cz, qperret@...gle.com, tim.c.chen@...ux.intel.com,
joshdon@...gle.com, timj@....org, kprateek.nayak@....com,
yu.c.chen@...el.com, youssefesmat@...omium.org,
joel@...lfernandes.org, efault@....de, tglx@...utronix.de
Subject: Re: [PATCH 11/15] sched/eevdf: Better handle mixed slice length

On Fri, Jun 02, 2023 at 03:45:18PM +0200, Vincent Guittot wrote:
> On Wed, 31 May 2023 at 14:47, Peter Zijlstra <peterz@...radead.org> wrote:
> > +static inline bool
> > +entity_has_slept(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> > +{
> > + u64 now;
> > +
> > + if (!(flags & ENQUEUE_WAKEUP))
> > + return false;
> > +
> > + if (flags & ENQUEUE_MIGRATED)
> > + return true;
> > +
> > + now = rq_clock_task(rq_of(cfs_rq));
> > + return (s64)(se->exec_start - now) >= se->slice;
> > +}
> > +
> > static void
> > place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> > {
> > @@ -4930,6 +4947,19 @@ place_entity(struct cfs_rq *cfs_rq, stru
> > lag = se->vlag;
> >
> > /*
> > + * For latency sensitive tasks; those that have a shorter than
> > + * average slice and do not fully consume the slice, transition
> > + * to EEVDF placement strategy #2.
> > + */
> > + if (sched_feat(PLACE_FUDGE) &&
> > + (cfs_rq->avg_slice > se->slice * cfs_rq->avg_load) &&
> > + entity_has_slept(cfs_rq, se, flags)) {
> > + lag += vslice;
> > + if (lag > 0)
> > + lag = 0;
>
> This PLACE_FUDGE does not look like a good heuristic, because it
> breaks the fairer sharing of cpu bandwidth that EEVDF is supposed to
> bring. Furthermore, it breaks the isolation between cpu bandwidth and
> latency, because playing with latency_nice will impact your cpu
> bandwidth.

Yeah, probably :/ Even though entity_has_slept() ensures the task slept
for at least one slice, that's probably not enough to preserve the
bandwidth constraints.
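
To make the placement change concrete, here is a minimal userspace
model of the two strategies (all names are local to the sketch; only
the lag arithmetic mirrors the quoted hunk):

/*
 * Strategy #1 keeps the entity's lag across sleep; the PLACE_FUDGE
 * approximation of strategy #2 forgives up to one virtual slice of
 * negative lag, clamped at zero so the entity is never placed ahead
 * of the queue's average vruntime.
 */
#include <stdio.h>
#include <stdint.h>

typedef int64_t s64;

static s64 place_lag_preserve(s64 lag)
{
	return lag;
}

static s64 place_lag_fudge(s64 lag, s64 vslice)
{
	lag += vslice;
	if (lag > 0)
		lag = 0;
	return lag;
}

int main(void)
{
	const s64 vslice = 3000;
	const s64 lags[] = { -7000, -3000, -1000, 0 };
	unsigned int i;

	for (i = 0; i < sizeof(lags) / sizeof(lags[0]); i++)
		printf("lag %6lld: preserve %6lld, fudge %6lld\n",
		       (long long)lags[i],
		       (long long)place_lag_preserve(lags[i]),
		       (long long)place_lag_fudge(lags[i], vslice));

	return 0;
}

With vslice = 3000, lag more negative than one vslice is only
partially forgiven (-7000 -> -4000), smaller debts are zeroed, and the
result never goes positive.
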
The fairness analysis in the paper conveniently avoids all 'interesting'
cases, including their own placement policies.

I'll sit on this one longer and think a bit more about it.
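
FWIW, assuming cfs_rq->avg_slice is maintained as the load-weighted
sum of slices (which the comparison against se->slice *
cfs_rq->avg_load suggests), that condition tests for a
shorter-than-average slice without doing a division. A userspace
sketch, with all names local to the sketch:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct ent {
	uint64_t slice;		/* requested slice, in ns */
	uint64_t weight;	/* load weight */
};

/*
 * avg_slice > se_slice * avg_load
 *   <=>  sum(slice_i * w_i) / sum(w_i) > se_slice
 * i.e. se's slice is shorter than the load-weighted average slice.
 */
static bool slice_below_avg(const struct ent *ents, unsigned int n,
			    uint64_t se_slice)
{
	uint64_t avg_slice = 0, avg_load = 0;
	unsigned int i;

	for (i = 0; i < n; i++) {
		avg_slice += ents[i].slice * ents[i].weight;
		avg_load += ents[i].weight;
	}

	return avg_slice > se_slice * avg_load;
}

int main(void)
{
	struct ent rq[] = {
		{ .slice = 3000000, .weight = 1024 },
		{ .slice = 9000000, .weight = 1024 },
	};

	/* the weighted average slice is 6ms: 1ms is below it, 9ms is not */
	printf("%d %d\n",
	       slice_below_avg(rq, 2, 1000000),
	       slice_below_avg(rq, 2, 9000000));

	return 0;
}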