Message-Id: <1462305773-7832-1-git-send-email-yuyang.du@intel.com>
Date:	Wed,  4 May 2016 04:02:41 +0800
From:	Yuyang Du <yuyang.du@...el.com>
To:	peterz@...radead.org, mingo@...nel.org,
	linux-kernel@...r.kernel.org
Cc:	bsegall@...gle.com, pjt@...gle.com, morten.rasmussen@....com,
	vincent.guittot@...aro.org, dietmar.eggemann@....com,
	juri.lelli@....com, Yuyang Du <yuyang.du@...el.com>
Subject: [PATCH v3 00/12] sched/fair: Optimize and clean up sched averages

Hi Peter,

This patch series combines the previous cleanup and optimization
series. As you and Ingo suggested, the increased kernel load scale
is reinstated when both 64BIT and FAIR_GROUP_SCHED are enabled.
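
For context, the reinstated scale boils down to a conditional
definition along the following lines. This is only a sketch; the
macro names (SCHED_FIXEDPOINT_SHIFT, NICE_0_LOAD_SHIFT) are
illustrative, not necessarily the exact ones used in the series:

	/* Sketch: double the fixed-point load resolution on 64-bit. */
	#define SCHED_FIXEDPOINT_SHIFT	10	/* base resolution: 1024 */

	#if defined(CONFIG_64BIT) && defined(CONFIG_FAIR_GROUP_SCHED)
	/* 64-bit: carry load at twice the resolution, 1024 * 1024 */
	# define NICE_0_LOAD_SHIFT	(2 * SCHED_FIXEDPOINT_SHIFT)
	# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
	# define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
	#else
	/* 32-bit: keep the base resolution to avoid overflow */
	# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
	# define scale_load(w)		(w)
	# define scale_load_down(w)	(w)
	#endif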

This patch series should produce no perceivable changes to load and
util, except that load's range is increased by a factor of 1024
(i.e., 10 more bits of fixed-point resolution).

My initial tests suggest this is the case; see the previous post for
the figures: http://article.gmane.org/gmane.linux.kernel/2213506.
The workloads run 100us out of every 200us, and 2000us out of every
8000us, again with fixed workload, fixed CPU, and fixed frequency.
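
Such a duty-cycle workload can be approximated with a simple
busy/sleep loop. The program below is a hypothetical sketch for
reproduction, not the actual test harness used:

	/* Hypothetical sketch: busy for RUN_US out of every PERIOD_US. */
	#include <stdint.h>
	#include <time.h>
	#include <unistd.h>

	#define RUN_US		100	/* busy time per period */
	#define PERIOD_US	200	/* period length */

	static uint64_t now_us(void)
	{
		struct timespec ts;

		clock_gettime(CLOCK_MONOTONIC, &ts);
		return (uint64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
	}

	int main(void)
	{
		for (;;) {
			uint64_t start = now_us();

			while (now_us() - start < RUN_US)
				;	/* burn CPU for the busy phase */
			usleep(PERIOD_US - RUN_US);
		}
	}

To match the fixed-CPU, fixed-frequency setup, the program would be
run pinned to one CPU, e.g. with taskset -c 0.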

I believe the code is cleaner and more efficient after these patches.

The changes in this version are changelog and code comment rewording
according to Peter's comments.

Thanks,
Yuyang

Yuyang Du (12):
  sched/fair: Optimize sum computation with a lookup table
  sched/fair: Rename variable names for sched averages
  sched/fair: Change the variable to hold the number of periods to
    32bit
  sched/fair: Add __always_inline compiler attribute to
    __accumulate_sum()
  sched/fair: Optimize __update_sched_avg()
  documentation: Add scheduler/sched-avg.txt
  sched/fair: Generalize the load/util averages resolution definition
  sched/fair: Remove SCHED_LOAD_SHIFT and SCHED_LOAD_SCALE
  sched/fair: Add introduction to the sched average metrics
  sched/fair: Remove scale_load_down() for load_avg
  sched/fair: Rename scale_load() and scale_load_down()
  sched/fair: Enable increased scale for kernel load

 Documentation/scheduler/sched-avg.txt |   94 ++++++++
 include/linux/sched.h                 |   81 ++++++-
 kernel/sched/core.c                   |    8 +-
 kernel/sched/fair.c                   |  400 +++++++++++++++++----------------
 kernel/sched/sched.h                  |   48 ++--
 5 files changed, 398 insertions(+), 233 deletions(-)
 create mode 100644 Documentation/scheduler/sched-avg.txt

-- 
1.7.9.5
