Message-ID: <e873dfbb-d13e-93c7-251e-dea90c3b40b5@arm.com>
Date: Mon, 12 Nov 2018 18:58:13 -0800
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Vincent Guittot <vincent.guittot@...aro.org>, peterz@...radead.org,
mingo@...nel.org, linux-kernel@...r.kernel.org
Cc: rjw@...ysocki.net, Morten.Rasmussen@....com,
patrick.bellasi@....com, pjt@...gle.com, bsegall@...gle.com,
thara.gopinath@...aro.org, pkondeti@...eaurora.org,
quentin.perret@....com
Subject: Re: [PATCH v6 0/2] sched/fair: update scale invariance of PELT
On 11/9/18 8:20 AM, Vincent Guittot wrote:
> This new version of the scale invariance patchset adds an important change
> compared to v3 and before. It still scales time to reflect the amount of
> work done during the elapsed running time, but this is now done at the rq
> level instead of per entity and per rt/dl/cfs_rq. The main advantage is
> that the scaling is done once per clock update, and we no longer need to
> maintain a stolen_idle_time per sched_avg. This also ensures that all PELT
> signals of an rq are always synced.
>
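
A minimal userspace sketch of that idea (the struct, function and parameter
names here are illustrative, not the actual kernel code): the elapsed delta
is scaled by the current frequency and compute capacity once per rq clock
update, so every PELT signal on the rq advances on the same scale-invariant
clock.

#include <stdint.h>
#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024

/* Hypothetical per-rq state: one scaled PELT clock shared by all signals. */
struct rq_sketch {
	uint64_t clock;		/* raw rq clock in ns */
	uint64_t clock_pelt;	/* scale-invariant PELT clock in ns */
};

/*
 * Scale the elapsed delta by the current frequency and CPU capacity
 * (both relative to SCHED_CAPACITY_SCALE). Because this happens once
 * per clock update, every sched_avg on this rq sees the same scaled
 * time, so no per-sched_avg stolen_idle_time is needed.
 */
static void update_rq_clock_pelt_sketch(struct rq_sketch *rq, uint64_t delta,
					unsigned long freq_scale,
					unsigned long cpu_scale)
{
	rq->clock += delta;
	delta = delta * freq_scale / SCHED_CAPACITY_SCALE;
	delta = delta * cpu_scale / SCHED_CAPACITY_SCALE;
	rq->clock_pelt += delta;
}

int main(void)
{
	struct rq_sketch rq = { 0, 0 };

	/* 1ms elapsed at half frequency on a full-capacity CPU only
	 * accounts ~0.5ms of scaled time. */
	update_rq_clock_pelt_sketch(&rq, 1000000ULL, 512, 1024);
	printf("clock=%llu clock_pelt=%llu\n",
	       (unsigned long long)rq.clock,
	       (unsigned long long)rq.clock_pelt);
	return 0;
}
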
> The 1st patch makes the rq_of() helper function available to the pelt.c
> file, and the 2nd patch implements the new scaling algorithm.
>
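
For reference, a sketch of what such a helper can look like (an illustration
under the assumption that each cfs_rq keeps a back-pointer to its rq, as with
group scheduling; not the patch itself):

struct rq;

struct cfs_rq {
	struct rq *rq;	/* back-pointer to the owning runqueue */
};

static inline struct rq *rq_of(struct cfs_rq *cfs_rq)
{
	return cfs_rq->rq;
}
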
> Changes since v5:
> - Fix the running_sum scaling in update_tg_cfs_runnable(), raised by Dietmar
> - Remove unused cpu parameters, raised by Dietmar
I just re-discovered that the comment above the definition of struct
sched_avg in include/linux/sched.h also mentions the different ways in
which we do invariance for load_avg and util_avg.
[...]
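
For what it's worth, here is a compact illustration of the distinction that
comment describes (a sketch with made-up helper names, not kernel code):
load contributions are made frequency-invariant only, while util
contributions are made both frequency- and capacity-invariant.

#include <stdint.h>

#define SCHED_CAPACITY_SCALE 1024

/* load: frequency-invariant only */
static uint64_t scale_load_contrib(uint64_t contrib, unsigned long freq_scale)
{
	return contrib * freq_scale / SCHED_CAPACITY_SCALE;
}

/* util: frequency- and CPU-capacity-invariant */
static uint64_t scale_util_contrib(uint64_t contrib, unsigned long freq_scale,
				   unsigned long cpu_scale)
{
	contrib = contrib * freq_scale / SCHED_CAPACITY_SCALE;
	return contrib * cpu_scale / SCHED_CAPACITY_SCALE;
}

With the rq-level approach above, this per-signal scaling goes away, since
the time itself is already scale-invariant.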