Message-ID: <20190821183713.GF2349@hirez.programming.kicks-ass.net>
Date: Wed, 21 Aug 2019 20:37:13 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Qais Yousef <qais.yousef@....com>
Cc: Peng Liu <iwtbavbm@...il.com>, linux-kernel@...r.kernel.org,
mingo@...hat.com
Subject: Re: [PATCH] sched/fair: eliminate redundant code in sched_slice()

On Wed, Aug 21, 2019 at 04:15:24PM +0100, Qais Yousef wrote:
> On 08/16/19 22:12, Peng Liu wrote:
> > Since sched_slice() is called at high frequency, even a small
> > simplification should be worthwhile.
> >
> > Signed-off-by: Peng Liu <iwtbavbm@...il.com>
> > ---
> >  kernel/sched/fair.c | 11 ++++-------
> >  1 file changed, 4 insertions(+), 7 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 1054d2cf6aaa..6ae2a507aac0 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -694,19 +694,16 @@ static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
> >  	u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
> >  
> >  	for_each_sched_entity(se) {
> > -		struct load_weight *load;
> >  		struct load_weight lw;
> >  
> >  		cfs_rq = cfs_rq_of(se);
> > -		load = &cfs_rq->load;
> > +		lw = cfs_rq->load;
> >  
> > -		if (unlikely(!se->on_rq)) {
> > +		if (unlikely(!se->on_rq))
> >  			lw = cfs_rq->load;
> >  
> > -			update_load_add(&lw, se->load.weight);
> > -			load = &lw;
> > -		}
> > -		slice = __calc_delta(slice, se->load.weight, load);
> > +		update_load_add(&lw, se->load.weight);
> > +		slice = __calc_delta(slice, se->load.weight, &lw);
>
> Unless I misread the diff, you changed the behavior.
>
> update_load_add() is only called if (unlikely(!se->on_rq)), but with your
> change it is called unconditionally. And lw is set twice.
>
> I think what you intended is this?
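
(For context: the fix being hinted at presumably keeps update_load_add()
under the !se->on_rq check, so an already-queued entity's weight is not
added to cfs_rq->load a second time. A minimal sketch of that shape for
the loop body in sched_slice(), using only the helpers visible in the
quoted hunk; this is not the exact diff posted in the thread, and unlike
the current code it copies cfs_rq->load on every iteration:)

	for_each_sched_entity(se) {
		struct load_weight lw;

		cfs_rq = cfs_rq_of(se);
		lw = cfs_rq->load;

		/* An entity that is not on the rq is not yet accounted in cfs_rq->load. */
		if (unlikely(!se->on_rq))
			update_load_add(&lw, se->load.weight);

		slice = __calc_delta(slice, se->load.weight, &lw);
	}
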
So I'd really rather hold off on this; Rik is poking at getting rid of
all of this hierarchical crud in one go.