Message-ID: <20251028090324.GQ4068168@noisy.programming.kicks-ass.net>
Date: Tue, 28 Oct 2025 10:03:24 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: kernel test robot <oliver.sang@...el.com>, japo@...ux.ibm.com
Cc: oe-lkp@...ts.linux.dev, lkp@...el.com, linux-kernel@...r.kernel.org,
x86@...nel.org, Juri Lelli <juri.lelli@...hat.com>,
Tejun Heo <tj@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
cgroups@...r.kernel.org, aubrey.li@...ux.intel.com,
yu.c.chen@...el.com
Subject: Re: [tip:sched/core] [sched] b079d93796:
WARNING:possible_recursive_locking_detected_migration_is_trying_to_acquire_lock:at:set_cpus_allowed_force_but_task_is_already_holding_lock:at:cpu_stopper_thread
On Mon, Oct 27, 2025 at 12:01:33PM +0100, Peter Zijlstra wrote:
Could someone confirm this fixes the problem?
> ---
> Subject: sched: Fix the do_set_cpus_allowed() locking fix
>
> Commit abfc01077df6 ("sched: Fix do_set_cpus_allowed() locking")
> overlooked that __balance_push_cpu_stop() calls select_fallback_rq()
> with rq->lock held. This means set_cpus_allowed_force() recursively
> takes rq->lock and the machine locks up.
>
> Run select_fallback_rq() earlier, without holding rq->lock. This opens
> up a race window where the task could get migrated out from under us,
> but that is harmless; we want the task migrated anyway.
>
> select_fallback_rq() itself will not be subject to concurrency as it
> will be fully serialized by p->pi_lock, so there is no chance of
> set_cpus_allowed_force() getting called with different arguments and
> selecting different fallback CPUs for one task.
>
> Fixes: abfc01077df6 ("sched: Fix do_set_cpus_allowed() locking")
> Reported-by: Jan Polensky <japo@...ux.ibm.com>
> Reported-by: kernel test robot <oliver.sang@...el.com>
> Closes: https://lore.kernel.org/oe-lkp/202510271206.24495a68-lkp@intel.com
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 1842285eac1e..67b5f2faab36 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -8044,18 +8044,15 @@ static int __balance_push_cpu_stop(void *arg)
> struct rq_flags rf;
> int cpu;
>
> - raw_spin_lock_irq(&p->pi_lock);
> - rq_lock(rq, &rf);
> -
> - update_rq_clock(rq);
> -
> - if (task_rq(p) == rq && task_on_rq_queued(p)) {
> + scoped_guard (raw_spinlock_irq, &p->pi_lock) {
> cpu = select_fallback_rq(rq->cpu, p);
> - rq = __migrate_task(rq, &rf, p, cpu);
> - }
>
> - rq_unlock(rq, &rf);
> - raw_spin_unlock_irq(&p->pi_lock);
> + rq_lock(rq, &rf);
> + update_rq_clock(rq);
> + if (task_rq(p) == rq && task_on_rq_queued(p))
> + rq = __migrate_task(rq, &rf, p, cpu);
> + rq_unlock(rq, &rf);
> + }
>
> put_task_struct(p);
>