Message-ID: <402b70ecb4d362ab6975b00a715872d585a18e35.camel@surriel.com>
Date: Mon, 01 Jul 2019 16:53:37 -0400
From: Rik van Riel <riel@...riel.com>
To: Josef Bacik <josef@...icpanda.com>
Cc: linux-kernel@...r.kernel.org, kernel-team@...com, pjt@...gle.com,
dietmar.eggemann@....com, peterz@...radead.org, mingo@...hat.com,
morten.rasmussen@....com, tglx@...utronix.de,
mgorman@...hsingularity.net, vincent.guittot@...aro.org
Subject: Re: [PATCH 09/10] sched,fair: add helper functions for flattened runqueue
On Mon, 2019-07-01 at 16:20 -0400, Josef Bacik wrote:
>
>
> > +static unsigned long task_se_h_weight(struct sched_entity *se)
> > +{
> > +	struct cfs_rq *cfs_rq;
> > +
> > +	if (!task_se_in_cgroup(se))
> > +		return se->load.weight;
> > +
> > +	cfs_rq = group_cfs_rq_of_parent(se);
> > +	update_cfs_rq_h_load(cfs_rq);
> > +
> > +	/* Reduce the load.weight by the h_load of the group the task is in. */
> > +	return (cfs_rq->h_load * se->load.weight) >> SCHED_FIXEDPOINT_SHIFT;
>
> This should be
>
> scale_load_down(cfs_rq->h_load * se->load.weight);
That may be the same mathematically, but it is
different conceptually.
If we convert CFS to have full load resolution with
cgroups (which we probably want), then scale_load_down
becomes a noop, while this shift continues doing the
right thing.
--
All Rights Reversed.