Date:   Fri, 10 Mar 2017 12:47:43 -0800
From:   Joel Fernandes <joelaf@...gle.com>
To:     linux-kernel@...r.kernel.org
Cc:     Joel Fernandes <joelaf@...gle.com>,
        Juri Lelli <Juri.Lelli@....com>,
        Patrick Bellasi <patrick.bellasi@....com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>
Subject: [PATCH] sched: write better comments for weight calculations

This patch rewrites the comments related to task priorities and CPU usage,
and adds an example to show how the weight-based calculation works.

Cc: Juri Lelli <Juri.Lelli@....com>
Cc: Patrick Bellasi <patrick.bellasi@....com>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>
Signed-off-by: Joel Fernandes <joelaf@...gle.com>
---
 kernel/sched/core.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c56fb57f2991..2175bf663f3d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8823,16 +8823,27 @@ void dump_cpu_task(int cpu)
 }
 
 /*
- * Nice levels are multiplicative, with a gentle 10% change for every
- * nice level changed. I.e. when a CPU-bound task goes from nice 0 to
- * nice 1, it will get ~10% less CPU time than another CPU-bound task
- * that remained on nice 0.
+ * Nice levels are multiplicative, with a gentle 10% relative change
+ * for every nice level changed. I.e. if there are two CPU-bound tasks
+ * of equal nice value and one of them goes from nice level 0 to 1,
+ * then the task now at nice level 1 will get ~5% less CPU time than
+ * before the change, and the task that remained at nice level 0 will
+ * get ~5% more CPU time.
  *
  * The "10% effect" is relative and cumulative: from _any_ nice level,
- * if you go up 1 level, it's -10% CPU usage, if you go down 1 level
- * it's +10% CPU usage. (to achieve that we use a multiplier of 1.25.
- * If a task goes up by ~10% and another task goes down by ~10% then
- * the relative distance between them is ~25%.)
+ * if you go up 1 level, it's -10% relative CPU usage; if you go down
+ * 1 level, it's +10% CPU usage. To achieve that, we use a multiplier
+ * of 1.25. If one task's CPU usage goes up by ~5% and another's goes
+ * down by ~5%, then the relative distance between their weights is
+ * ~25%, as shown in the following example:
+ *
+ * Consider two tasks T1 and T2 scheduled within a sched_period of
+ * 10ms. Say T1 has a nice value of 0 and T2 a nice value of 1; their
+ * corresponding weights are then 1024 for T1 and 820 for T2.
+ *
+ * The relative delta between their weights is ~25% (1.25 * 820 ~= 1024):
+ * T1's CPU slice = (1024 / (820 + 1024)) * 10 ~= 5.5ms  (55% usage)
+ * T2's CPU slice = (820  / (820 + 1024)) * 10 ~= 4.5ms  (45% usage)
  */
 const int sched_prio_to_weight[40] = {
  /* -20 */     88761,     71755,     56483,     46273,     36291,
-- 
2.12.0.246.ga2ecc84866-goog
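
As a sanity check on the numbers in the new comment, here is a minimal
userspace sketch (not part of the patch, purely illustrative): it derives
the ~5.5ms / ~4.5ms slices from the example weights 1024 (nice 0) and
820 (nice 1) over a 10ms period, and verifies that adjacent entries at
the top of sched_prio_to_weight differ by roughly the 1.25 multiplier.
The weight values are quoted from the comment and from the table shown
in the diff; everything else is assumed for illustration only.

#include <stdio.h>

int main(void)
{
	/* Example from the comment: T1 at nice 0, T2 at nice 1. */
	const double w_t1 = 1024.0;		/* weight for nice 0 */
	const double w_t2 = 820.0;		/* weight for nice 1 */
	const double sched_period_ms = 10.0;	/* example sched_period */
	const double total = w_t1 + w_t2;

	/* Each task's slice is proportional to its share of the total weight. */
	printf("T1 slice: %.2f ms (%.1f%% of CPU)\n",
	       sched_period_ms * w_t1 / total, 100.0 * w_t1 / total);
	printf("T2 slice: %.2f ms (%.1f%% of CPU)\n",
	       sched_period_ms * w_t2 / total, 100.0 * w_t2 / total);

	/*
	 * Adjacent nice levels should differ by roughly the 1.25
	 * multiplier; these first entries (nice -20..-16) are quoted
	 * from sched_prio_to_weight as shown in the diff above.
	 */
	const double weight[] = { 88761, 71755, 56483, 46273, 36291 };
	for (int i = 0; i < 4; i++)
		printf("weight[%d] / weight[%d] = %.2f\n",
		       i, i + 1, weight[i] / weight[i + 1]);

	return 0;
}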
