Message-ID: <CAKfTPtC0ytqNmexVFm6kzTSpj3r0kiOxs5MtHMhvR5rJuiz_Jw@mail.gmail.com>
Date: Wed, 24 Jun 2020 18:40:32 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Valentin Schneider <valentin.schneider@....com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
linux-kernel <linux-kernel@...r.kernel.org>,
kernel test robot <rong.a.chen@...el.com>,
Phil Auld <pauld@...hat.com>, Hillf Danton <hdanton@...a.com>
Subject: Re: [PATCH] sched/cfs: change initial value of runnable_avg
On Wed, 24 Jun 2020 at 18:32, Valentin Schneider
<valentin.schneider@....com> wrote:
>
>
> On 24/06/20 16:44, Vincent Guittot wrote:
> > A performance regression on the reaim benchmark has been reported with
> > commit 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group").
> >
> > The problem comes from the initial value of runnable_avg, which is set
> > to the max value. If the newly forked task turns out to be a short-running
> > task, the group of CPUs is wrongly classified as overloaded and tasks are
> > pulled less aggressively.
> >
> > Set the initial value of runnable_avg equal to util_avg to reflect that
> > there has been no waiting time so far.
> >
> > Fixes: 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
> > Reported-by: kernel test robot <rong.a.chen@...el.com>
> > Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> > ---
> > kernel/sched/fair.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 0424a0af5f87..45e467bf42fc 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -806,7 +806,7 @@ void post_init_entity_util_avg(struct task_struct *p)
> > }
> > }
> >
> > - sa->runnable_avg = cpu_scale;
> > + sa->runnable_avg = sa->util_avg;
>
> IIRC we didn't go for this initially because hackbench behaved slightly
> worse with it. Did we end up re-evaluating this? Also, how does this reaim
> benchmark behave with it? I *think* the table from that regression thread
> says it behaves better, but I had a hard time parsing it (seems like it got
> damaged by line wrapping)

Yes. hackbench was slightly worse, and it was the only input we had at
the time, which is why we decided to keep the other init. Since then,
Rong has reported a significant regression for reaim which is fixed by
this patch.

>
> Conceptually I'm all for it, so as long as the tests back it up:
> Reviewed-by: Valentin Schneider <valentin.schneider@....com>
>
> >
> > if (p->sched_class != &fair_sched_class) {
> > /*
>
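
For anyone skimming the thread, here is a toy user-space model of the
effect (not the kernel code; the 2-CPU group, the load numbers and the
110% margin are made up for illustration): seeding a small forked task's
runnable_avg with the max value can push a group over an "overloaded"
threshold, while seeding it from util_avg does not.

/*
 * Toy model (not kernel code): shows why initializing a new task's
 * runnable_avg to the maximum value can tip a group into "overloaded",
 * while seeding it from util_avg does not. All numbers are illustrative.
 */
#include <stdio.h>
#include <stdbool.h>

#define SCHED_CAPACITY_SCALE 1024	/* full capacity of one CPU */

/* crude stand-in for the runnable check used to classify a group */
static bool group_looks_overloaded(unsigned long group_runnable,
				   unsigned long group_capacity)
{
	/* overloaded if runnable demand clearly exceeds capacity (110% margin, made up) */
	return group_runnable * 100 > group_capacity * 110;
}

int main(void)
{
	unsigned long group_capacity = 2 * SCHED_CAPACITY_SCALE;	/* 2 CPUs */
	unsigned long existing_runnable = 1800;				/* load already there */
	unsigned long new_task_util = 128;				/* small forked task */

	/* old init: runnable_avg = cpu_scale (max) */
	unsigned long old_init = existing_runnable + SCHED_CAPACITY_SCALE;
	/* new init: runnable_avg = util_avg */
	unsigned long new_init = existing_runnable + new_task_util;

	printf("old init -> overloaded: %d\n",
	       group_looks_overloaded(old_init, group_capacity));	/* prints 1 */
	printf("new init -> overloaded: %d\n",
	       group_looks_overloaded(new_init, group_capacity));	/* prints 0 */
	return 0;
}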