Message-ID: <20250106111021.GD20870@noisy.programming.kicks-ass.net>
Date: Mon, 6 Jan 2025 12:10:21 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Tianchen Ding <dtcccc@...ux.alibaba.com>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Mike Galbraith <efault@....de>, Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH] sched: Fix race between yield_to() and try_to_wake_up()
On Tue, Dec 31, 2024 at 01:50:20PM +0800, Tianchen Ding wrote:
> We hit a SCHED_WARN in set_next_buddy():
> __warn_printk
> set_next_buddy
> yield_to_task_fair
> yield_to
> kvm_vcpu_yield_to [kvm]
> ...
>
> After a short dig, we found that the rq lock held by yield_to() may
> not belong to the rq that the target task is actually on. There is a
> race window against try_to_wake_up().
>
> CPU0                                 target_task
>
>                                      blocking on CPU1
> lock rq0 & rq1
> double check task_rq == p_rq, ok
>                                      woken to CPU2 (lock task_pi & rq2)
>                                      task_rq = rq2
> yield_to_task_fair (w/o lock rq2)
>
> In this race window, yield_to() is operating on the task without
> holding the correct rq lock. Fix this by taking the task's pi_lock
> first.
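>
> The serialization relies on try_to_wake_up() taking p->pi_lock before
> it may migrate the task; a minimal sketch of that ordering (heavily
> abridged and simplified from kernel/sched/core.c, not the literal
> code):
>
> 	/* try_to_wake_up(), abridged sketch */
> 	raw_spin_lock_irqsave(&p->pi_lock, flags);
> 	...
> 	cpu = select_task_rq(p, p->wake_cpu, &wake_flags);
> 	set_task_cpu(p, cpu);		/* task_rq(p) changes here */
> 	ttwu_queue(p, cpu, wake_flags);
> 	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
>
> Holding p->pi_lock across the task_rq() double check therefore keeps
> the target rq from changing underneath yield_to().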
>
> Fixes: d95f41220065 ("sched: Add yield_to(task, preempt) functionality")
> Signed-off-by: Tianchen Ding <dtcccc@...ux.alibaba.com>
> ---
> kernel/sched/syscalls.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
> index ff0e5ab4e37c..943406c4ee86 100644
> --- a/kernel/sched/syscalls.c
> +++ b/kernel/sched/syscalls.c
> @@ -1433,7 +1433,7 @@ int __sched yield_to(struct task_struct *p, bool preempt)
> struct rq *rq, *p_rq;
> int yielded = 0;
>
> - scoped_guard (irqsave) {
> + scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
> rq = this_rq();
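>
> For reference, scoped_guard (raw_spinlock_irqsave, &p->pi_lock) is
> roughly equivalent to the following (a simplified sketch of the
> cleanup.h guard semantics, not its actual implementation):
>
> 	unsigned long flags;
>
> 	raw_spin_lock_irqsave(&p->pi_lock, flags);
> 	{
> 		/* scope body: this_rq() / double_rq_lock() / yield */
> 	}
> 	raw_spin_unlock_irqrestore(&p->pi_lock, flags);	/* on scope exit */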
Thanks!