Date:	Thu, 31 Mar 2016 04:16:49 +0800
From:	Yuyang Du <>
Subject: [PATCH RESEND v2 0/6] sched/fair: Clean up sched metric definitions

Hi Peter,

This patch series was left over from last year, so I am resending it. Would
you please give it a look?

The previous version is at

This series cleans up the sched metrics, changes include:
(1) Define SCHED_FIXEDPOINT_SHIFT for all fixed point arithmetic scaling.
(2) Get rid of the confusing scaling factors SCHED_LOAD_SHIFT and SCHED_LOAD_SCALE,
    leaving only NICE_0_LOAD (for load) and SCHED_CAPACITY_SCALE (for util).
(3) Consistently use SCHED_CAPACITY_SCALE for util-related metrics.
(4) Add more detailed introduction to the sched metrics.
(5) Get rid of unnecessary extra scaling up and down for load.
(6) Rename the mappings between priority (user) and load (kernel).
(7) Remove/replace inactive code.

So, except for (5), no logic is changed. Per Ingo's request, I compared the
disassembly of kernel/sched/built-in.o before vs. after the patches. From the
very first patch onward there are a bunch of "offset" changes, all following
this pattern:

     60e3:      eb 21                   jmp    6106 <rq_clock+0x7c>
-    60e5:      be db 02 00 00          mov    $0x2db,%esi
+    60e5:      be e0 02 00 00          mov    $0x2e0,%esi

I am not sure what exactly changed, but my guess is that the code layout
shifted a bit?

Anyway, thanks a lot to Ben, Morten, Dietmar, Vincent, and others who provided
valuable comments.

v2 changes:
- Fix bugs in calculate_imbalance(), thanks to Vincent
- Fix "#if 0" for increased kernel load, suggested by Ingo


Yuyang Du (6):
  sched/fair: Generalize the load/util averages resolution definition
  sched/fair: Remove SCHED_LOAD_SHIFT and SCHED_LOAD_SCALE
  sched/fair: Add introduction to the sched load avg metrics
  sched/fair: Remove scale_load_down() for load_avg
  sched/fair: Rename scale_load() and scale_load_down()
  sched/fair: Remove unconditionally inactive code

 include/linux/sched.h | 81 +++++++++++++++++++++++++++++++++++++++++++--------
 init/Kconfig          | 16 ++++++++++
 kernel/sched/core.c   |  8 ++---
 kernel/sched/fair.c   | 33 ++++++++++-----------
 kernel/sched/sched.h  | 52 +++++++++++++++------------------
 5 files changed, 127 insertions(+), 63 deletions(-)

