Message-Id: <1462226078-31904-1-git-send-email-yuyang.du@intel.com>
Date: Tue, 3 May 2016 05:54:26 +0800
From: Yuyang Du <yuyang.du@...el.com>
To: peterz@...radead.org, mingo@...nel.org,
linux-kernel@...r.kernel.org
Cc: bsegall@...gle.com, pjt@...gle.com, morten.rasmussen@....com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
juri.lelli@....com, Yuyang Du <yuyang.du@...el.com>
Subject: [PATCH v2 00/12] sched/fair: Optimize and clean up sched averages
Hi Peter,
This patch series combines the previous cleanup and optimization
series. As you and Ingo suggested, the increased kernel load scale
is reinstated when both 64BIT and FAIR_GROUP_SCHED are enabled. In
addition, the changes include Vincent's fix, typo fixes, and
reworded changelogs and comments.
Thanks,
Yuyang
Yuyang Du (12):
sched/fair: Optimize sum computation with a lookup table
sched/fair: Rename variable names for sched averages
sched/fair: Change the variable to hold the number of periods to
32bit integer
sched/fair: Add __always_inline compiler attribute to
__accumulate_sum()
sched/fair: Optimize __update_sched_avg()
documentation: Add scheduler/sched-avg.txt
sched/fair: Generalize the load/util averages resolution definition
sched/fair: Remove SCHED_LOAD_SHIFT and SCHED_LOAD_SCALE
sched/fair: Add introduction to the sched average metrics
sched/fair: Remove scale_load_down() for load_avg
sched/fair: Rename scale_load() and scale_load_down()
sched/fair: Enable increased scale for kernel load
Documentation/scheduler/sched-avg.txt | 137 ++++++++++++
include/linux/sched.h | 81 ++++++-
kernel/sched/core.c | 8 +-
kernel/sched/fair.c | 398 +++++++++++++++++----------------
kernel/sched/sched.h | 48 ++--
5 files changed, 439 insertions(+), 233 deletions(-)
create mode 100644 Documentation/scheduler/sched-avg.txt
--
1.7.9.5