Message-Id: <20220124184133.084799295@linuxfoundation.org>
Date: Mon, 24 Jan 2022 19:33:40 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Vincent Donnefort <vincent.donnefort@....com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Valentin Schneider <valentin.schneider@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 5.16 0231/1039] sched/fair: Fix per-CPU kthread and wakee stacking for asym CPU capacity
From: Vincent Donnefort <vincent.donnefort@....com>
[ Upstream commit 014ba44e8184e1acf93e0cbb7089ee847802f8f0 ]
select_idle_sibling() has a special case for tasks woken up by a per-CPU
kthread, where the selected CPU is the previous one. On asymmetric CPU
capacity systems, the assumption was that the wakee couldn't have a
bigger utilization at task placement than it had at its last
activation. That assumption does not account for uclamp.min, which can
change completely between two task activations and therefore mandates
the fitness check asym_fits_capacity(), even for the exit path described
above.
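
To see why the fitness check matters, consider a minimal userspace
sketch below (the helper names, the 1280/1024 margin, and all values
are illustrative simplifications of the kernel's fits_capacity() and
uclamp handling, not its exact code): a task whose raw utilization fit
a little CPU at its last activation can stop fitting once a uclamp.min
is set between activations.

#include <stdbool.h>
#include <stdio.h>

/* Sketch of fits_capacity(): utilization must leave ~20% headroom. */
static bool fits_capacity(unsigned long util, unsigned long capacity)
{
	return util * 1280 < capacity * 1024;
}

/* Sketch of uclamp clamping: placement uses util clamped to [min, max]. */
static unsigned long uclamp_task_util(unsigned long raw_util,
				      unsigned long uclamp_min,
				      unsigned long uclamp_max)
{
	if (raw_util < uclamp_min)
		return uclamp_min;
	if (raw_util > uclamp_max)
		return uclamp_max;
	return raw_util;
}

int main(void)
{
	unsigned long little_cap = 446;	/* example little-core capacity */
	unsigned long raw_util = 100;	/* low raw util at last activation */

	/* Without uclamp, the task comfortably fits the little CPU. */
	printf("raw fits: %d\n", fits_capacity(raw_util, little_cap));

	/*
	 * A uclamp.min of 512 set between two activations forces
	 * placement to treat the task as needing at least 512 of
	 * capacity, so the same little CPU no longer fits.
	 */
	printf("clamped fits: %d\n",
	       fits_capacity(uclamp_task_util(raw_util, 512, 1024),
			     little_cap));
	return 0;
}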
Fixes: b4c9c9f15649 ("sched/fair: Prefer prev cpu in asymmetric wakeup path")
Signed-off-by: Vincent Donnefort <vincent.donnefort@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@....com>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@....com>
Link: https://lkml.kernel.org/r/20211129173115.4006346-1-vincent.donnefort@arm.com
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
kernel/sched/fair.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 97d89516fbea2..f2cf047b25e56 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6400,7 +6400,8 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	if (is_per_cpu_kthread(current) &&
 	    in_task() &&
 	    prev == smp_processor_id() &&
-	    this_rq()->nr_running <= 1) {
+	    this_rq()->nr_running <= 1 &&
+	    asym_fits_capacity(task_util, prev)) {
 		return prev;
 	}
--
2.34.1