Message-Id: <20240306222838.15087-1-daweilics@gmail.com>
Date: Wed, 6 Mar 2024 14:28:37 -0800
From: Dawei Li <daweilics@...il.com>
To:
Cc: Dawei Li <daweilics@...il.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
linux-kernel@...r.kernel.org
Subject: [PATCH] sched/fair: simplify __calc_delta()

Based on how __calc_delta() is called now, the input parameter 'weight'
is always NICE_0_LOAD, so we no longer need to pass it in.
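
As far as I can tell, the only caller left is calc_delta_fair() (see
the second hunk below), which always passes NICE_0_LOAD:

	if (unlikely(se->load.weight != NICE_0_LOAD))
		delta = __calc_delta(delta, NICE_0_LOAD, &se->load);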

Also, when weight is always NICE_0_LOAD, the initial fact value is
always 2^10, so the first fact_hi is always 0 and we can get rid of the
first if block.
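
To spell that out: scale_load_down(NICE_0_LOAD) is 2^10 = 1024 under
both CONFIG_32BIT and CONFIG_64BIT, so with weight == NICE_0_LOAD the
old code reduces to

	u64 fact = scale_load_down(NICE_0_LOAD);	/* always 1024 */
	u32 fact_hi = (u32)(fact >> 32);		/* always 0 */

and the 'if (unlikely(fact_hi))' branch can never be taken.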

The previous comment "(delta_exec * (weight * lw->inv_weight)) >>
WMULT_SHIFT" seems to assume that lw->weight * lw->inv_weight is always
(approximately) equal to 2^WMULT_SHIFT. However, when CONFIG_64BIT is
set, lw->weight * lw->inv_weight is (approximately) equal to
2^WMULT_SHIFT * 2^10. What holds for both CONFIG_32BIT and CONFIG_64BIT
is that scale_load_down(lw->weight) * lw->inv_weight is (approximately)
equal to 2^WMULT_SHIFT. (Correct me if I am wrong.)
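
To make that concrete, take a nice-0 entity on CONFIG_64BIT, where
WMULT_CONST = ~0U (about 2^32), WMULT_SHIFT = 32, and
__update_inv_weight() computes inv_weight from the scaled-down weight:

	w          = scale_load_down(lw->weight)   = 2^10
	inv_weight = WMULT_CONST / w              ~= 2^22
	w * inv_weight           ~= 2^32 = 2^WMULT_SHIFT
	lw->weight * inv_weight  ~= 2^20 * 2^22 = 2^WMULT_SHIFT * 2^10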

Also update the comment for calc_delta_fair() to make it more accurate.

Signed-off-by: Dawei Li <daweilics@...il.com>
---
kernel/sched/fair.c | 29 ++++++++++-------------------
1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6a16129f9a5c..c5cdb15f7d62 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -252,32 +252,23 @@ static void __update_inv_weight(struct load_weight *lw)
 }
 
 /*
- * delta_exec * weight / lw.weight
+ * delta_exec * NICE_0_LOAD / lw->weight
  *   OR
- * (delta_exec * (weight * lw->inv_weight)) >> WMULT_SHIFT
+ * (delta_exec * scale_load_down(NICE_0_LOAD) * lw->inv_weight) >> WMULT_SHIFT
  *
- * Either weight := NICE_0_LOAD and lw \e sched_prio_to_wmult[], in which case
- * we're guaranteed shift stays positive because inv_weight is guaranteed to
- * fit 32 bits, and NICE_0_LOAD gives another 10 bits; therefore shift >= 22.
- *
- * Or, weight =< lw.weight (because lw.weight is the runqueue weight), thus
- * weight/lw.weight <= 1, and therefore our shift will also be positive.
+ * We're guaranteed shift stays positive because inv_weight is guaranteed to
+ * fit 32 bits, and scale_load_down(NICE_0_LOAD) gives another 10 bits;
+ * therefore shift >= 22.
  */
-static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw)
+static u64 __calc_delta(u64 delta_exec, struct load_weight *lw)
 {
-	u64 fact = scale_load_down(weight);
-	u32 fact_hi = (u32)(fact >> 32);
+	u64 fact = scale_load_down(NICE_0_LOAD);
+	int fact_hi;
 	int shift = WMULT_SHIFT;
 	int fs;
 
 	__update_inv_weight(lw);
 
-	if (unlikely(fact_hi)) {
-		fs = fls(fact_hi);
-		shift -= fs;
-		fact >>= fs;
-	}
-
 	fact = mul_u32_u32(fact, lw->inv_weight);
 	fact_hi = (u32)(fact >> 32);
 
@@ -291,12 +282,12 @@ static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight
 }
 
 /*
- * delta /= w
+ * delta *= NICE_0_LOAD / se->load.weight
  */
 static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se)
 {
 	if (unlikely(se->load.weight != NICE_0_LOAD))
-		delta = __calc_delta(delta, NICE_0_LOAD, &se->load);
+		delta = __calc_delta(delta, &se->load);
 
 	return delta;
 }
--
2.40.1