Message-Id: <1547631791-16018-1-git-send-email-vincent.guittot@linaro.org>
Date: Wed, 16 Jan 2019 10:43:08 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: peterz@...radead.org, mingo@...nel.org,
linux-kernel@...r.kernel.org
Cc: rjw@...ysocki.net, dietmar.eggemann@....com,
Morten.Rasmussen@....com, patrick.bellasi@....com, pjt@...gle.com,
bsegall@...gle.com, thara.gopinath@...aro.org,
pkondeti@...eaurora.org, quentin.perret@....com,
srinivas.pandruvada@...ux.intel.com,
Vincent Guittot <vincent.guittot@...aro.org>
Subject: [PATCH v8 0/3] sched/fair: update scale invariance of PELT

This new version of the scale invariance patchset adds an important
change compared to v3 and before. It still scales the time to reflect
the amount of work that has been done during the elapsed running time,
but this is now done at the rq level instead of per entity and
rt/dl/cfs_rq. The main advantage is that the scaling is done once per
clock update and we no longer need to maintain a stolen_idle_time per
sched_avg. This also ensures that all PELT signals of a rq will always
be in sync.
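
To make the idea more concrete, here is a minimal, self-contained
sketch of what "scaling the time at the rq level" means (illustrative
only, not code from this series; all names and constants below are
hypothetical). The elapsed delta is scaled by the current frequency
and the CPU's capacity once per rq clock update, and every PELT signal
of that rq then consumes the same scaled clock:

  /* Illustrative sketch only; not the patch itself. */
  #include <stdio.h>
  #include <stdint.h>

  #define CAPACITY_SCALE 1024ULL

  struct rq_sketch {
          uint64_t clock;       /* raw rq clock, in ns */
          uint64_t clock_pelt;  /* scaled clock shared by all PELT signals */
  };

  /* Scale an elapsed delta by current frequency and CPU capacity. */
  static uint64_t scale_time(uint64_t delta, uint64_t freq_scale,
                             uint64_t cpu_scale)
  {
          delta = (delta * freq_scale) / CAPACITY_SCALE;
          delta = (delta * cpu_scale) / CAPACITY_SCALE;
          return delta;
  }

  /* Done once per rq clock update, so all PELT signals of this rq
   * stay in sync and no per-sched_avg bookkeeping is needed. */
  static void rq_clock_update(struct rq_sketch *rq, uint64_t delta,
                              uint64_t freq_scale, uint64_t cpu_scale)
  {
          rq->clock += delta;
          rq->clock_pelt += scale_time(delta, freq_scale, cpu_scale);
  }

  int main(void)
  {
          struct rq_sketch rq = { 0, 0 };

          /* 1 ms elapsed while running at half frequency on a
           * full-capacity CPU: only 0.5 ms of "work time" is accounted. */
          rq_clock_update(&rq, 1000000, 512, 1024);
          printf("clock=%llu ns, clock_pelt=%llu ns\n",
                 (unsigned long long)rq.clock,
                 (unsigned long long)rq.clock_pelt);
          return 0;
  }
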
Changes since v7:
- Add patch 3 to skip updating util_est when utilization is higher than
  the CPU's capacity (see the sketch below)
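
A rough sketch of that condition (placeholder names, not the actual
code of patch 3):

  #include <stdbool.h>

  /* Placeholder sketch of the skip described above: when the computed
   * utilization is higher than the CPU's capacity, the util_est update
   * is simply not performed. */
  static inline bool util_est_needs_update(unsigned long util,
                                           unsigned long cpu_capacity)
  {
          return util <= cpu_capacity;
  }
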
Vincent Guittot (3):
sched/fair: move rq_of helper function
sched/fair: update scale invariance of PELT
sched/pelt: skip updating util_est when utilization is higher than
cpu's capacity
include/linux/sched.h | 23 +++-------
kernel/sched/core.c | 1 +
kernel/sched/deadline.c | 6 +--
kernel/sched/fair.c | 72 ++++++++++++++-----------------
kernel/sched/pelt.c | 45 +++++++++++---------
kernel/sched/pelt.h | 111 ++++++++++++++++++++++++++++++++++++++++++++++--
kernel/sched/rt.c | 6 +--
kernel/sched/sched.h | 28 +++++++++++-
8 files changed, 206 insertions(+), 86 deletions(-)
--
2.7.4