Message-Id: <20220124184029.069413824@linuxfoundation.org>
Date: Mon, 24 Jan 2022 19:38:20 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Vincent Donnefort <vincent.donnefort@....com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Valentin Schneider <valentin.schneider@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 5.10 135/563] sched/fair: Fix per-CPU kthread and wakee stacking for asym CPU capacity

From: Vincent Donnefort <vincent.donnefort@....com>

[ Upstream commit 014ba44e8184e1acf93e0cbb7089ee847802f8f0 ]

select_idle_sibling() has a special case for tasks woken up by a per-CPU
kthread, where the selected CPU is the previous one. On asymmetric CPU
capacity systems, the assumption was that the wakee couldn't have a
bigger utilization at task placement time than it had during its last
activation. That did not account for uclamp.min, which can change
completely between two task activations and therefore mandates the
fitness criterion asym_fits_capacity(), even for the exit path described
above.

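For context, a minimal sketch of the criterion being added, modeled on
the 5.10-era asym_fits_capacity() helper in kernel/sched/fair.c
(paraphrased for illustration, not quoted verbatim from this tree):

	/*
	 * Only filter on CPU capacity when the system is asymmetric;
	 * on uniform-capacity systems every CPU fits by definition.
	 */
	static inline bool asym_fits_capacity(int task_util, int cpu)
	{
		if (static_branch_unlikely(&sched_asym_cpucapacity))
			return fits_capacity(task_util, capacity_of(cpu));

		return true;
	}

Here task_util is the wakee's utilization with uclamp clamping already
applied by select_idle_sibling(), and fits_capacity() demands roughly
20% headroom (util * 1280 < capacity * 1024), so a raised uclamp.min
can disqualify the previous CPU even when nr_running would allow
stacking.
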
Fixes: b4c9c9f15649 ("sched/fair: Prefer prev cpu in asymmetric wakeup path")
Signed-off-by: Vincent Donnefort <vincent.donnefort@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@....com>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@....com>
Link: https://lkml.kernel.org/r/20211129173115.4006346-1-vincent.donnefort@arm.com
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
 kernel/sched/fair.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a7589552be5fc..2a33cb5a10e59 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6286,7 +6286,8 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	if (is_per_cpu_kthread(current) &&
 	    in_task() &&
 	    prev == smp_processor_id() &&
-	    this_rq()->nr_running <= 1) {
+	    this_rq()->nr_running <= 1 &&
+	    asym_fits_capacity(task_util, prev)) {
 		return prev;
 	}
 
--
2.34.1