Message-ID: <CABk29Nud5dBo9ZdYhPLAPiVtQ9qh8aPOytydp7EXa-rYyYaMHA@mail.gmail.com>
Date: Wed, 29 Mar 2023 11:54:58 -0700
From: Josh Don <joshdon@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...nel.org, vincent.guittot@...aro.org,
linux-kernel@...r.kernel.org, juri.lelli@...hat.com,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, bristot@...hat.com, corbet@....net,
qyousef@...alina.io, chris.hyser@...cle.com,
patrick.bellasi@...bug.net, pjt@...gle.com, pavel@....cz,
qperret@...gle.com, tim.c.chen@...ux.intel.com, timj@....org,
kprateek.nayak@....com, yu.c.chen@...el.com,
youssefesmat@...omium.org, joel@...lfernandes.org, efault@....de
Subject: Re: [PATCH 08/17] sched/fair: Implement an EEVDF like policy
On Wed, Mar 29, 2023 at 1:12 AM Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Tue, Mar 28, 2023 at 06:26:51PM -0700, Josh Don wrote:
>
> > > @@ -5088,19 +5307,20 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
> > > static void
> > > check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
> > > {
> > > - unsigned long ideal_runtime, delta_exec;
> > > + unsigned long delta_exec;
> > > struct sched_entity *se;
> > > s64 delta;
> > >
> > > - /*
> > > - * When many tasks blow up the sched_period; it is possible that
> > > - * sched_slice() reports unusually large results (when many tasks are
> > > - * very light for example). Therefore impose a maximum.
> > > - */
> > > - ideal_runtime = min_t(u64, sched_slice(cfs_rq, curr), sysctl_sched_latency);
> > > + if (sched_feat(EEVDF)) {
> > > + if (pick_eevdf(cfs_rq) != curr)
> > > + goto preempt;
> >
> > This could shortcircuit the loop in pick_eevdf once we find a best
> > that has less vruntime and sooner deadline than curr, since we know
> > we'll never pick curr in that case. Might help performance when we
> > have a large tree for this cfs_rq.
>
> Yeah, one of the things I did consider was having this set cfs_rq->next
> such that the reschedule pick doesn't have to do the pick again. But I
> figured keep things simple for now.
Yea, that makes sense. I was thinking of something similar along the lines
of cfs_rq->next as another way to avoid the duplicate computation. But
agreed, this can be a future optimization.
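
For reference, here's a rough toy sketch (plain userspace C, not the actual
fair.c code) of the short-circuit condition I had in mind; the vruntime and
deadline field names follow this series, everything else is made up purely
for illustration:

/*
 * Toy userspace model of the short-circuit idea (not kernel code).
 * Field names (vruntime, deadline) follow the EEVDF series; the rest is
 * a simplified assumption purely for illustration.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct entity {
	long long vruntime;	/* virtual runtime */
	long long deadline;	/* virtual deadline */
};

/*
 * An entity with both a smaller vruntime and an earlier virtual deadline
 * than curr means curr can never be the EEVDF pick: if curr is eligible,
 * this entity is eligible too (smaller vruntime) and beats it on deadline;
 * if curr is not eligible it cannot be picked anyway.
 */
static bool beats_curr(const struct entity *se, const struct entity *curr)
{
	return se->vruntime < curr->vruntime && se->deadline < curr->deadline;
}

/*
 * Scan the queued entities and bail out as soon as one is known to beat
 * curr; that is all the tick-preemption path needs to know, so no full
 * pick is required here.
 */
static bool would_preempt_curr(const struct entity *ents, size_t nr,
			       const struct entity *curr)
{
	for (size_t i = 0; i < nr; i++) {
		if (beats_curr(&ents[i], curr))
			return true;	/* short-circuit */
	}
	return false;
}

int main(void)
{
	struct entity curr = { .vruntime = 100, .deadline = 150 };
	struct entity queued[] = {
		{ .vruntime = 120, .deadline = 140 },	/* later vruntime: no decision */
		{ .vruntime =  90, .deadline = 130 },	/* beats curr: stop here */
	};

	printf("preempt curr: %d\n", would_preempt_curr(queued, 2, &curr));
	return 0;
}

In the real rbtree walk the same test would let check_preempt_tick() resched
without finishing the full pick, at the cost of a slightly less generic
pick_eevdf().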