Message-Id: <20210630141204.8197-1-xuewen.yan94@gmail.com>
Date: Wed, 30 Jun 2021 22:12:04 +0800
From: Xuewen Yan <xuewen.yan94@...il.com>
To: valentin.schneider@....com, mingo@...hat.com, peterz@...radead.org,
juri.lelli@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com
Cc: rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, linux-kernel@...r.kernel.org,
patrick.bellasi@...bug.net, qais.yousef@....com, qperret@...gle.com
Subject: [PATCH v2] sched/uclamp: Avoid getting unreasonable uclamp value when rq is idle
From: Xuewen Yan <xuewen.yan@...soc.com>

Currently in uclamp_rq_util_with(), when task != NULL, uclamp_max is computed as follows:

	uc_rq_max = rq->uclamp[UCLAMP_MAX].value;
	uc_eff_max = uclamp_eff_value(p, UCLAMP_MAX);
	uclamp_max = max{uc_rq_max, uc_eff_max};

Consider the following scenario:
(1) the rq is idle, so uc_rq_max still holds the last runnable task's UCLAMP_MAX;
(2) the task p's uc_eff_max < uc_rq_max.

As a result, uclamp_max = uc_rq_max instead of uc_eff_max, which is unreasonable. This scenario often occurs in find_energy_efficient_cpu() when the task has a smaller UCLAMP_MAX.

When the rq has the UCLAMP_FLAG_IDLE flag set, enqueuing the task will clear UCLAMP_FLAG_IDLE and set the rq clamp to the task's via uclamp_idle_reset(), so there is no need to read the rq clamp at all. Skipping the read also avoids the problem described above.
Fixes: 9d20ad7dfc9a ("sched/uclamp: Add uclamp_util_with()")
Signed-off-by: Xuewen Yan <xuewen.yan@...soc.com>
---
Changes in v2:
 * add Fixes tag (Valentin Schneider)
 * ignore all rq clamps when idle (Valentin Schneider)
---
kernel/sched/sched.h | 21 ++++++++++++++-------
1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c80d42e9589b..14a41a243f7b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2818,20 +2818,27 @@ static __always_inline
 unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
 				  struct task_struct *p)
 {
-	unsigned long min_util;
-	unsigned long max_util;
+	unsigned long min_util = 0;
+	unsigned long max_util = 0;
 
 	if (!static_branch_likely(&sched_uclamp_used))
 		return util;
 
-	min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value);
-	max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value);
-
 	if (p) {
-		min_util = max(min_util, uclamp_eff_value(p, UCLAMP_MIN));
-		max_util = max(max_util, uclamp_eff_value(p, UCLAMP_MAX));
+		min_util = uclamp_eff_value(p, UCLAMP_MIN);
+		max_util = uclamp_eff_value(p, UCLAMP_MAX);
+
+		/*
+		 * Ignore last runnable task's max clamp, as this task will
+		 * reset it. Similarly, no need to read the rq's min clamp.
+		 */
+		if (rq->uclamp_flags & UCLAMP_FLAG_IDLE)
+			goto out;
 	}
 
+	min_util = max_t(unsigned long, min_util, READ_ONCE(rq->uclamp[UCLAMP_MIN].value));
+	max_util = max_t(unsigned long, max_util, READ_ONCE(rq->uclamp[UCLAMP_MAX].value));
+out:
 	/*
 	 * Since CPU's {min,max}_util clamps are MAX aggregated considering
 	 * RUNNABLE tasks with _different_ clamps, we can end up with an
--
2.25.1