Message-ID: <20250318060917.GA26027@noisy.programming.kicks-ass.net>
Date: Tue, 18 Mar 2025 07:09:17 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: John Stultz <jstultz@...gle.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Joel Fernandes <joelagnelf@...dia.com>,
Qais Yousef <qyousef@...alina.io>, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Valentin Schneider <vschneid@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Zimuzo Ezeozue <zezeozue@...gle.com>, Mel Gorman <mgorman@...e.de>,
Will Deacon <will@...nel.org>, Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Metin Kaya <Metin.Kaya@....com>,
Xuewen Yan <xuewen.yan94@...il.com>,
K Prateek Nayak <kprateek.nayak@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Suleiman Souhlal <suleiman@...gle.com>, kernel-team@...roid.com,
Valentin Schneider <valentin.schneider@....com>,
Connor O'Brien <connoro@...gle.com>
Subject: Re: [RFC PATCH v15 7/7] sched: Start blocked_on chain processing in
find_proxy_task()
On Mon, Mar 17, 2025 at 05:43:56PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 12, 2025 at 03:11:37PM -0700, John Stultz wrote:
> > @@ -6668,47 +6676,138 @@ static bool proxy_deactivate(struct rq *rq, struct task_struct *donor)
> > }
> >
> > /*
> > + * Find runnable lock owner to proxy for mutex blocked donor
> > + *
> > + * Follow the blocked-on relation:
> > + * task->blocked_on -> mutex->owner -> task...
> > + *
> > + * Lock order:
> > + *
> > + * p->pi_lock
> > + * rq->lock
> > + * mutex->wait_lock
> > + *
> > + * Returns the task that is going to be used as execution context (the one
> > + * that is actually going to be run on cpu_of(rq)).
> > */
> > static struct task_struct *
> > find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
> > {
> > + struct task_struct *owner = NULL;
> > + struct task_struct *ret = NULL;
> > + int this_cpu = cpu_of(rq);
> > + struct task_struct *p;
> > struct mutex *mutex;
> >
> > + /* Follow blocked_on chain. */
> > + for (p = donor; task_is_blocked(p); p = owner) {
> > + mutex = p->blocked_on;
> > + /* Something changed in the chain, so pick again */
> > + if (!mutex)
> > + return NULL;
> > /*
> > + * By taking mutex->wait_lock we hold off concurrent mutex_unlock()
> > + * and ensure @owner sticks around.
> > */
> > + raw_spin_lock(&mutex->wait_lock);
>
> This comment is only true if you kill __mutex_unlock_fast(), which I
> don't think you did in the previous patches.
Ignore this; I got myself confused again. :-)
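
(For context, a rough sketch of what the mainline fast-path unlock in
kernel/locking/mutex.c looks like, simplified: it releases ->owner with a
cmpxchg and never touches ->wait_lock, which is what the comment above was
worrying about.)

	/*
	 * Simplified sketch of the lockless unlock fast path: release the
	 * mutex by cmpxchg'ing ->owner from 'current' to 0.  Because this
	 * never takes ->wait_lock, holding ->wait_lock by itself does not
	 * serialize against a fast-path mutex_unlock().
	 */
	static __always_inline bool __mutex_unlock_fast(struct mutex *lock)
	{
		unsigned long curr = (unsigned long)current;

		return atomic_long_try_cmpxchg_release(&lock->owner, &curr, 0UL);
	}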