Message-ID: <87fsrlcwcb.mognet@arm.com>
Date: Wed, 24 Nov 2021 17:11:32 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Vincent Donnefort <vincent.donnefort@....com>,
peterz@...radead.org, mingo@...hat.com, vincent.guittot@...aro.org
Cc: linux-kernel@...r.kernel.org, mgorman@...hsingularity.net,
dietmar.eggemann@....com,
Vincent Donnefort <vincent.donnefort@....com>
Subject: Re: [PATCH] sched/fair: Fix per-CPU kthread and wakee stacking for asym CPU capacity
On 24/11/21 14:14, Vincent Donnefort wrote:
> A shortcut has been introduced in select_idle_sibling() to return prev_cpu
> if the wakee is woken up by a per-CPU kthread. This is an issue for
> asymmetric CPU capacity systems where the wakee might not fit prev_cpu
> anymore. Evaluate asym_fits_capacity() for prev_cpu before using that
> shortcut.
>
> Fixes: 52262ee567ad ("sched/fair: Allow a per-CPU kthread waking a task to stack on the same CPU, to fix XFS performance regression")
Shouldn't that rather be
b4c9c9f15649 ("sched/fair: Prefer prev cpu in asymmetric wakeup path")
? That commit is more recent than the one you point to, and before it,
asymmetric CPU capacity systems wouldn't use any of the sis() heuristics.
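For context, the capacity check that commit introduced boils down to
something like this (paraphrasing from memory, not a verbatim copy of
mainline):

	static inline bool asym_fits_capacity(unsigned long task_util, int cpu)
	{
		/* Only relevant on asymmetric CPU capacity systems */
		if (static_branch_unlikely(&sched_asym_cpucapacity))
			return fits_capacity(task_util, capacity_of(cpu));

		/* On symmetric systems every CPU fits, so this is a no-op */
		return true;
	}
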
I reportedly reviewed said commit back then, and don't recall anything
specific about that conditional... The cover-letter for v2 states:
https://lore.kernel.org/lkml/20201028174412.680-1-vincent.guittot@linaro.org/
"""
don't check capacity for the per-cpu kthread UC because the assumption is
that the wakee queued work for the per-cpu kthread that is now complete and
the task was already on this cpu.
"""
So the assumption here is that current is going to sleep right after waking
up p, so current's utilization doesn't matter; and since p was already
running on prev, it should still fit there...
I'm thinking things should actually be OK with your other patch that
excludes 'current == swapper' from this condition.
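i.e. AIUI the condition would end up looking something like the below (just
a sketch of how I read that other patch, not the actual diff):

	/*
	 * Sketch only: excluding the idle task (swapper) means the "wakee
	 * queued work for this per-CPU kthread and was already running on
	 * this CPU" assumption behind the original shortcut holds, so prev
	 * should still fit without an explicit capacity check.
	 */
	if (is_per_cpu_kthread(current) &&
	    !is_idle_task(current) &&
	    prev == smp_processor_id() &&
	    this_rq()->nr_running <= 1)
		return prev;
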
> Signed-off-by: Vincent Donnefort <vincent.donnefort@....com>
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6291876a9d32..b90dc6fd86ca 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6410,7 +6410,8 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> */
> if (is_per_cpu_kthread(current) &&
> prev == smp_processor_id() &&
> - this_rq()->nr_running <= 1) {
> + this_rq()->nr_running <= 1 &&
> + asym_fits_capacity(task_util, prev)) {
> return prev;
> }
>
> --
> 2.25.1