Date:	Thu, 28 Apr 2016 10:56:07 +0800
From:	Yuyang Du <yuyang.du@...el.com>
To:	peterz@...radead.org, mingo@...nel.org,
	linux-kernel@...r.kernel.org
Cc:	bsegall@...gle.com, pjt@...gle.com, morten.rasmussen@....com,
	vincent.guittot@...aro.org, dietmar.eggemann@....com,
	juri.lelli@....com, Yuyang Du <yuyang.du@...el.com>
Subject: [PATCH 0/6] Optimize sched averages computation

I started optimizing __update_load_avg() for the flat util hierarchy
implementation. Since the work was already started, let me finish it.

The flat util hierarchy itself is not in this patchset. I am still pondering
whether we should add a sched_avg to the rq to do it, or simply update the
cfs_rq util only when we update the top cfs_rq (Dietmar and Vincent took the
latter approach). I think this needs some experiments.
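
For illustration only, here is a small user-space sketch of the idea behind
the first patch. The names, the floating-point table generation and the
closed form used for full half-lives below are mine, not the kernel code;
the point is only that the decaying sum 1024 * (y + y^2 + ... + y^n), with
y^32 = 1/2, which __update_load_avg() needs for n elapsed periods, can be
answered with a precomputed table of partial sums instead of a loop over
32-period chunks.

/* build with: gcc -O2 -o sketch sketch.c -lm */
#include <stdio.h>
#include <math.h>

#define HALFLIFE  32      /* decay periods per half-life: y^32 = 1/2 */
#define SCALE     1024    /* contribution of one fully elapsed period */

/* partial_sum[n] = SCALE * (y + y^2 + ... + y^n), for n = 0..HALFLIFE */
static double partial_sum[HALFLIFE + 1];

static void build_table(void)
{
	double y = pow(0.5, 1.0 / HALFLIFE);
	double acc = 0.0, yn = 1.0;
	int n;

	for (n = 1; n <= HALFLIFE; n++) {
		yn *= y;
		acc += SCALE * yn;
		partial_sum[n] = acc;
	}
}

/* Reference: accumulate the geometric series term by term, O(n). */
static double sum_loop(int n)
{
	double y = pow(0.5, 1.0 / HALFLIFE);
	double acc = 0.0, yn = 1.0;
	int i;

	for (i = 1; i <= n; i++) {
		yn *= y;
		acc += SCALE * yn;
	}
	return acc;
}

/*
 * Lookup variant: k full half-lives are handled with the closed form
 * S_inf * (1 - (1/2)^k); the remaining rem periods cost one table
 * lookup, decayed by k half-lives. No loop over the n periods.
 */
static double sum_lookup(int n)
{
	double s_inf = partial_sum[HALFLIFE] / (1.0 - 0.5);	/* series limit */
	int k = n / HALFLIFE, rem = n % HALFLIFE;

	return s_inf * (1.0 - pow(0.5, k)) + partial_sum[rem] * pow(0.5, k);
}

int main(void)
{
	int n;

	build_table();
	for (n = 1; n <= 200; n += 37)
		printf("n=%3d  loop=%8.2f  lookup=%8.2f\n",
		       n, sum_loop(n), sum_lookup(n));
	return 0;
}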

Yuyang Du (6):
  sched/fair: Optimize sum computation with a lookup table
  sched/fair: Rename variable names for sched averages
  sched/fair: Change the variable to hold the number of periods to
    32bit integer
  sched/fair: Add __always_inline compiler attribute to
    __accumulate_sum()
  sched/fair: Optimize __update_sched_avg()
  documentation: Add scheduler/sched-avg.txt

 Documentation/scheduler/sched-avg.txt |  160 +++++++++++++++
 kernel/sched/fair.c                   |  352 +++++++++++++++++----------------
 2 files changed, 339 insertions(+), 173 deletions(-)
 create mode 100644 Documentation/scheduler/sched-avg.txt

-- 
1.7.9.5
