Message-ID: <20170426224020.GB11348@wtj.duckdns.org>
Date: Wed, 26 Apr 2017 15:40:20 -0700
From: Tejun Heo <tj@...nel.org>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
Chris Mason <clm@...com>, kernel-team@...com
Subject: Re: [PATCH 1/2] sched/fair: Fix how load gets propagated from cfs_rq
to its sched_entity
Hello,
On Wed, Apr 26, 2017 at 06:51:23PM +0200, Vincent Guittot wrote:
> > It's not temporary. The weight of a group is its shares, which is its
> > load fraction of the configured weight of the group. Assuming UP, if
> > you configure a group to the weight of 1024 and have any task running
> > full-tilt in it, the group will converge to the load of 1024. The
> > problem is that the propagation logic is currently doing something
> > completely different and temporarily pushes down the load whenever it
> > triggers.
>
> Ok, I see your point and agree that there is an issue when propagating the
> load_avg of a task group whose tasks have a lower weight than the group's
> shares, but your proposal has an issue of its own: it uses runnable_load_avg
> instead of load_avg, which makes the propagation of load_avg incorrect.
> Something like the change below, which keeps using load_avg, solves the
> problem:
>
> + if (gcfs_rq->load.weight) {
> + long shares = scale_load_down(calc_cfs_shares(gcfs_rq, gcfs_rq->tg));
> +
> + load = min(gcfs_rq->avg.load_avg *
> + shares / scale_load_down(gcfs_rq->load.weight), shares);
>
> I have run schbench with the change above on v4.11-rc8 and the latencies are OK.
Hmm... so, I'll test this but this wouldn't solve the problem of
root's runnable_load_avg being out of sync with the approximate sum of
all task loads, which is the cause of the latencies that I'm seeing.
Are you saying that with the above change, you're not seeing the
higher latency issue that you reported in the other reply?
Thanks.
--
tejun