Message-ID: <87ilwhcycb.mognet@arm.com>
Date: Wed, 24 Nov 2021 16:28:20 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Vincent Donnefort <vincent.donnefort@....com>,
peterz@...radead.org, mingo@...hat.com, vincent.guittot@...aro.org
Cc: linux-kernel@...r.kernel.org, mgorman@...hsingularity.net,
dietmar.eggemann@....com,
Vincent Donnefort <vincent.donnefort@....com>
Subject: Re: [PATCH] sched/fair: Fix detection of per-CPU kthreads waking a task
On 24/11/21 15:42, Vincent Donnefort wrote:
> select_idle_sibling() will return prev_cpu for the case where the task is
> woken up by a per-CPU kthread. However, the idle task was recently made
> to match is_per_cpu_kthread() as well, so it now spuriously triggers that
> select_idle_sibling() exit path. Adding a !is_idle_task() check ensures
> we only take it for genuine per-CPU kthread wakeups.
>
> Fixes: 00b89fe0197f ("sched: Make the idle task quack like a per-CPU kthread")
This patch-set is the gift that keeps on giving... I owe a lot of folks a
lot of beer :(
> Signed-off-by: Vincent Donnefort <vincent.donnefort@....com>
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 945d987246c5..8bf95b0e368d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6399,6 +6399,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> * pattern is IO completions.
> */
> if (is_per_cpu_kthread(current) &&
> + !is_idle_task(current) &&
> prev == smp_processor_id() &&
^^^^^^^^^^^^^^^^^^^^^^^^^^
(1)
> this_rq()->nr_running <= 1) {
So if we get here, it means we already failed the earlier check:

	if ((available_idle_cpu(target) || sched_idle_cpu(target)) &&
	    asym_fits_capacity(task_util, target))
		return target;
AFAICT (1) implies "prev == target" (target can be either prev or the
waking CPU), so per the above this implies prev isn't idle. If current is
the idle task, we can still have stuff enqueued (which matches nr_running
<= 1) and be on our way to schedule_idle(), or have rq->ttwu_pending (per
idle_cpu()). IOW, matching against the idle task here can lead to
undesired coscheduling.
If the above isn't bonkers:
Reviewed-by: Valentin Schneider <valentin.schneider@....com>
> return prev;
> --
> 2.25.1