Message-ID: <20251008102722.GT3419281@noisy.programming.kicks-ass.net>
Date: Wed, 8 Oct 2025 12:27:22 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: John Stultz <jstultz@...gle.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
K Prateek Nayak <kprateek.nayak@....com>,
Joel Fernandes <joelagnelf@...dia.com>,
Qais Yousef <qyousef@...alina.io>, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Valentin Schneider <vschneid@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Zimuzo Ezeozue <zezeozue@...gle.com>, Mel Gorman <mgorman@...e.de>,
Will Deacon <will@...nel.org>, Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Metin Kaya <Metin.Kaya@....com>,
Xuewen Yan <xuewen.yan94@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Suleiman Souhlal <suleiman@...gle.com>,
kuyo chang <kuyo.chang@...iatek.com>, hupu <hupu.gm@...il.com>,
kernel-team@...roid.com
Subject: Re: [PATCH v22 1/6] locking: Add task::blocked_lock to serialize
blocked_on state
On Fri, Sep 26, 2025 at 03:29:09AM +0000, John Stultz wrote:
> So far, we have been able to utilize the mutex::wait_lock
> for serializing the blocked_on state, but when we move to
> proxying across runqueues, we will need to add more state
> and a way to serialize changes to this state in contexts
> where we don't hold the mutex::wait_lock.
>
> So introduce the task::blocked_lock, which nests under the
> mutex::wait_lock in the locking order, and rework the locking
> to use it.
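(For illustration only, not part of the patch: the nesting described above
means any path that needs both locks takes mutex::wait_lock first and the
task's blocked_lock inside it, roughly:

	raw_spin_lock_irqsave(&lock->wait_lock, flags);	/* outer: mutex::wait_lock */
	raw_spin_lock(&p->blocked_lock);		/* inner: task::blocked_lock */
	/* ... update p->blocked_on state ... */
	raw_spin_unlock(&p->blocked_lock);
	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);

where 'lock' is the mutex and 'p' the blocked task.)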
>
> Signed-off-by: John Stultz <jstultz@...gle.com>
> Reviewed-by: K Prateek Nayak <kprateek.nayak@....com>
> ---
>  include/linux/sched.h        | 52 +++++++++++++++---------------------
>  init/init_task.c             |  1 +
>  kernel/fork.c                |  1 +
>  kernel/locking/mutex-debug.c |  4 +--
>  kernel/locking/mutex.c       | 40 +++++++++++++++++----------
>  kernel/locking/ww_mutex.h    |  4 +--
>  kernel/sched/core.c          |  4 ++-
>  7 files changed, 57 insertions(+), 49 deletions(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index e4ce0a76831e5..cb4e81d9d9b67 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> +static inline struct mutex *get_task_blocked_on(struct task_struct *p)
> +{
> +	guard(raw_spinlock_irqsave)(&p->blocked_lock);
> +	return __get_task_blocked_on(p);
> }
This isn't a safe function in general; nothing guarantees the value
returned is still stable once the lock is dropped on return. Perhaps move
it into kernel/locking/mutex.h; its only users (below) are mutex debug
code after all.
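Something like the below would keep the locked helper private to the
mutex code; rough sketch only (untested, and assumes
__get_task_blocked_on() stays visible to kernel/locking/mutex.h):

	/* kernel/locking/mutex.h -- sketch */
	static inline struct mutex *get_task_blocked_on(struct task_struct *p)
	{
		/*
		 * Snapshot taken under p->blocked_lock; it can be stale by
		 * the time the caller looks at it, which is fine for the
		 * debug WARNs below.
		 */
		guard(raw_spinlock_irqsave)(&p->blocked_lock);
		return __get_task_blocked_on(p);
	}

The mutex-debug users below would then pick it up without any further
change.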
> diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
> index 949103fd8e9b5..1d8cff71f65e1 100644
> --- a/kernel/locking/mutex-debug.c
> +++ b/kernel/locking/mutex-debug.c
> @@ -54,13 +54,13 @@ void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
>  	lockdep_assert_held(&lock->wait_lock);
> 
>  	/* Current thread can't be already blocked (since it's executing!) */
> -	DEBUG_LOCKS_WARN_ON(__get_task_blocked_on(task));
> +	DEBUG_LOCKS_WARN_ON(get_task_blocked_on(task));
>  }
> 
>  void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
> 				struct task_struct *task)
>  {
> -	struct mutex *blocked_on = __get_task_blocked_on(task);
> +	struct mutex *blocked_on = get_task_blocked_on(task);
> 
>  	DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list));
>  	DEBUG_LOCKS_WARN_ON(waiter->task != task);
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index de7d6702cd96c..c44fc63d4476e 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -740,11 +752,11 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
>  		return 0;
> 
>  err:
> -	__clear_task_blocked_on(current, lock);
> +	clear_task_blocked_on(current, lock);
>  	__set_current_state(TASK_RUNNING);
>  	__mutex_remove_waiter(lock, &waiter);
>  err_early_kill:
> -	WARN_ON(__get_task_blocked_on(current));
> +	WARN_ON(get_task_blocked_on(current));
>  	trace_contention_end(lock, ret);
>  	raw_spin_unlock_irqrestore_wake(&lock->wait_lock, flags, &wake_q);
>  	debug_mutex_free_waiter(&waiter);