Message-ID: <20160915144337.GF5016@twins.programming.kicks-ass.net>
Date: Thu, 15 Sep 2016 16:43:37 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: mingo@...nel.org, linux-kernel@...r.kernel.org,
yuyang.du@...el.com, Morten.Rasmussen@....com,
linaro-kernel@...ts.linaro.org, dietmar.eggemann@....com,
pjt@...gle.com, bsegall@...gle.com
Subject: Re: [PATCH 4/7 v3] sched: propagate load during synchronous
attach/detach

On Mon, Sep 12, 2016 at 09:47:49AM +0200, Vincent Guittot wrote:
> +static inline void
> +update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se)
> +{
> +	struct cfs_rq *gcfs_rq = group_cfs_rq(se);
> +	long delta, load = gcfs_rq->avg.load_avg;
> +
> +	/* If the load of group cfs_rq is null, the load of the
> +	 * sched_entity will also be null so we can skip the formula
> +	 */
> +	if (load) {
> +		long tg_load;
> +
> +		/* Get tg's load and ensure tg_load > 0 */
> +		tg_load = atomic_long_read(&gcfs_rq->tg->load_avg) + 1;
> +
> +		/* Ensure tg_load >= load and updated with current load */
> +		tg_load -= gcfs_rq->tg_load_avg_contrib;
> +		tg_load += load;
> +
> +		/* scale gcfs_rq's load into tg's shares */
> +		load *= scale_load_down(gcfs_rq->tg->shares);
> +		load /= tg_load;
> +
> +		/*
> +		 * we need to compute a correction term in the case that the
> +		 * task group is consuming <1 cpu so that we would contribute
> +		 * the same load as a task of equal weight.
> +		 */
> +		if (tg_load < scale_load_down(gcfs_rq->tg->shares)) {
> +			load *= tg_load;
> +			load /= scale_load_down(gcfs_rq->tg->shares);
> +		}
Note that you're reversing the exact scaling you just applied.
That is:

	       shares    tg_load
	load * ------- * ------- == load
	       tg_load   shares
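(With made-up numbers: say shares == 1024 and tg_load == 300; the first
step turns load into load * 1024 / 300 and the correction then
multiplies by 300 / 1024, landing back at load, modulo whatever the
integer divisions truncate.)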
> +	}
So something like:

	shares = scale_load_down(gcfs_rq->tg->shares);

	if (tg_load >= shares) {
		load *= shares;
		load /= tg_load;
	}
should be the same as the above and saves a bunch of math, no?
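To make that concrete, a quick standalone sketch (purely illustrative;
the values are made up, and gcfs_load, tg_load and shares merely stand
in for gcfs_rq->avg.load_avg, the corrected tg load and
scale_load_down(tg->shares)) comparing the two paths:

	#include <stdio.h>

	int main(void)
	{
		long gcfs_load = 512;	/* stand-in for gcfs_rq->avg.load_avg */
		long tg_load = 300;	/* stand-in for the corrected tg load */
		long shares = 1024;	/* stand-in for scale_load_down(tg->shares) */

		/* current patch: scale unconditionally, then reverse
		 * the scaling when tg_load < shares */
		long a = gcfs_load;
		a *= shares;
		a /= tg_load;
		if (tg_load < shares) {
			a *= tg_load;
			a /= shares;
		}

		/* suggested: only scale when tg_load >= shares */
		long b = gcfs_load;
		if (tg_load >= shares) {
			b *= shares;
			b /= tg_load;
		}

		/* prints 511 512: equal up to the truncation of the two
		 * extra integer divisions in the first variant */
		printf("%ld %ld\n", a, b);
		return 0;
	}

It prints 511 512, so the short-circuit not only saves the math but
also avoids losing a bit to the extra truncating divisions.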