Message-ID: <CAKfTPtAKz5zGwyNUtEeM+2JJNBNVnoKmFkXdFu1hjWD52_BwFg@mail.gmail.com>
Date: Fri, 15 Nov 2019 19:00:45 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Mel Gorman <mgorman@...e.de>,
Doug Smythies <dsmythies@...us.net>,
"open list:THERMAL" <linux-pm@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Sargun Dhillon <sargun@...gun.me>, Tejun Heo <tj@...nel.org>,
Xie XiuQi <xiexiuqi@...wei.com>, xiezhipeng1@...wei.com,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>
Subject: Re: [PATCH v4] sched/freq: move call to cpufreq_update_util
On Fri, 15 Nov 2019 at 18:44, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Fri, Nov 15, 2019 at 04:31:35PM +0100, Vincent Guittot wrote:
>
> > > @@ -7476,10 +7477,14 @@ static void update_blocked_averages(int cpu)
> > > * list_add_leaf_cfs_rq() for details.
> > > */
> > > for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos) {
> > > + bool last = cfs_rq == &rq->cfs;
> > > struct sched_entity *se;
> > >
> > > - if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq))
> > > + if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) {
> > > update_tg_load_avg(cfs_rq, 0);
> > > + if (last)
> >
> > using this last variable makes the code more readable
> >
> > > + decayed = true;
> > > + }
> > >
> > > /* Propagate pending load changes to the parent, if any: */
> > > se = cfs_rq->tg->se[cpu];
> > > @@ -7490,7 +7495,7 @@ static void update_blocked_averages(int cpu)
> > > * There can be a lot of idle CPU cgroups. Don't let fully
> > > * decayed cfs_rqs linger on the list.
> > > */
> > > - if (cfs_rq_is_decayed(cfs_rq))
> > > + if (!last && cfs_rq_is_decayed(cfs_rq))
> > > list_del_leaf_cfs_rq(cfs_rq);
> >
> > Keeping the root cfs_rq in the list will not change anything now that
> > cfs_rq_util_change is in update_load_avg(); cfs_rq_util_change will
> > not be called here.
>
> Oh but it does, since it will then keep triggering that hunk above on
> every period.
indeed
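For reference, here is a condensed sketch of the loop with the two hunks
above applied (locking, the rt/dl/irq updates and the rest of
update_blocked_averages() are elided; helper names are the ones already
in fair.c). It is only meant to make the control flow explicit, not a
verbatim copy of the patch:

	for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos) {
		/* true only for the root cfs_rq, i.e. &rq->cfs */
		bool last = cfs_rq == &rq->cfs;
		struct sched_entity *se;

		if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) {
			update_tg_load_avg(cfs_rq, 0);
			/* only the root's decay sets the flag */
			if (last)
				decayed = true;
		}

		/* Propagate pending load changes to the parent, if any: */
		se = cfs_rq->tg->se[cpu];
		if (se && !skip_blocked_update(se))
			update_load_avg(cfs_rq_of(se), se, 0);

		/*
		 * The root is never deleted: keeping it on the list is
		 * what makes the hunk above trigger again on every period.
		 */
		if (!last && cfs_rq_is_decayed(cfs_rq))
			list_del_leaf_cfs_rq(cfs_rq);
	}

The decayed flag set for the root cfs_rq is then consumed after the loop,
in the last hunk quoted below.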
>
> > >
> > > /* Don't need periodic decay once load/util_avg are null */
> > > @@ -7498,6 +7503,9 @@ static void update_blocked_averages(int cpu)
> > > done = false;
> > > }
> > >
> > > + if (decayed || done)
> >
> > I'm not sure I get why you want to call cpufreq when done is true,
> > which means that everything has reached 0.
> > Why do you prefer to use done instead of ORing the decay of rt, dl,
> > irq and cfs?
> >
> > > + cpufreq_update_util(rq, 0);
>
> Because we don't care about the rt,dl,irq decay anywhere else either. We
> only call cpufreq_update_util() for rq->cfs changes.
cpufreq_update_util is called for each enqueue/dequeue of rt/dl tasks
>
> Also, as I argued, realistically rt,dl and cfs decay on the same edge,
> so aside from some fuzz on the first period, they're all the same. But
But the 1st period can be the only one for the next 4 seconds.
> even if they were not, why would we care about their exact edges here
> when we do not anywhere else.
>
> Not caring reduces the number of cpufreq_update_util() calls to one per
> period, instead of potentially many more.
>
> Doing the || done ensures we never miss the all-zero case.
How can we miss it, according to your explanation above?
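For comparison, the ORing variant asked about above would look roughly
like the sketch below. This is a sketch only, not a tested patch: locking
and the leaf cfs_rq walk are elided, update_cfs_blocked() is a made-up
name standing in for that walk, and the other helpers are the existing
PELT ones from kernel/sched/pelt.c:

static void update_blocked_averages(int cpu)
{
	struct rq *rq = cpu_rq(cpu);
	const struct sched_class *curr_class = rq->curr->sched_class;
	bool decayed = false;

	decayed |= update_rt_rq_load_avg(rq_clock_pelt(rq), rq,
					 curr_class == &rt_sched_class);
	decayed |= update_dl_rq_load_avg(rq_clock_pelt(rq), rq,
					 curr_class == &dl_sched_class);
	decayed |= update_irq_load_avg(rq, 0);

	/*
	 * Placeholder for the leaf cfs_rq walk shown in the hunks above;
	 * it would return whether the root cfs_rq (&rq->cfs) decayed.
	 */
	decayed |= update_cfs_blocked(rq);

	if (decayed)
		cpufreq_update_util(rq, 0);
}

Which of the two variants ends up calling cpufreq_update_util() more often
across periods is exactly the point being debated above.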