Message-ID: <20241217083946.GF35539@noisy.programming.kicks-ass.net>
Date: Tue, 17 Dec 2024 09:39:46 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: John Stultz <jstultz@...gle.com>
Cc: LKML <linux-kernel@...r.kernel.org>, Joel Fernandes <joelaf@...gle.com>,
Qais Yousef <qyousef@...alina.io>, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Valentin Schneider <vschneid@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Zimuzo Ezeozue <zezeozue@...gle.com>, Mel Gorman <mgorman@...e.de>,
Will Deacon <will@...nel.org>, Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Metin Kaya <Metin.Kaya@....com>,
Xuewen Yan <xuewen.yan94@...il.com>,
K Prateek Nayak <kprateek.nayak@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Daniel Lezcano <daniel.lezcano@...aro.org>, kernel-team@...roid.com,
Connor O'Brien <connoro@...gle.com>
Subject: Re: [RFC][PATCH v14 2/7] locking/mutex: Rework
task_struct::blocked_on

On Mon, Dec 16, 2024 at 09:01:24PM -0800, John Stultz wrote:
> On Mon, Dec 16, 2024 at 8:54 AM Peter Zijlstra <peterz@...radead.org> wrote:
> > On Fri, Dec 13, 2024 at 07:39:57PM -0800, John Stultz wrote:
> > > On Fri, Dec 13, 2024 at 3:22 PM Peter Zijlstra <peterz@...radead.org> wrote:
> > > > On Mon, Nov 25, 2024 at 11:51:56AM -0800, John Stultz wrote:
> > > So yes, the description can use improvement here. I at one time had
> > > 3-4 separate very fine grained patches (see the top 4 patches here:
> > > https://github.com/johnstultz-work/linux-dev/commits/proxy-exec-v7-6.7-rc6-fine-grained/?after=c4cad6e353c00254a2dfbb227ef81d8c3827427d+35)
> > > that I rolled into one when sending out (mostly to avoid overwhelming
> > > folks), but the squished commit description isn't as clear.
> > > So if it's helpful I can split this back out?
> > >
> > > I'll also add some better comments as well.
> >
> > Not sure yet about splitting back out -- let me try and figure out what
> > all is actually done / needed.
> >
> > So blocked_lock started out as another lock around ttwu(), in order to
> > serialize the task wakeup vs reading a remote ->blocked_on relation.
>
> I think of it primarily to serialize the task->blocked* state (there
> gets to be quite a bit by the end of the proxy series).
>
> > Since we do this with rq->lock held, it can't be ->pi_lock, and hence
> > ->blocked_lock was born.
>
> Yeah, we needed to use something other than the task->pi_lock to
> serialize it as it has to nest under the mutex->wait_lock.

No, the critical bit is nesting under rq->lock -- we need to be able to
walk the blocked relation in the middle of schedule(). You can equally
well wrap blocked_lock outside of wait_lock; that doesn't really matter
much.
> > Later patches appear to have moved it into mutex, mirroring the
> > ->wait_lock -- this is probably better.
> >
> > /me goes chase that state thing for a bit..
>
> ? I'm not sure I followed this. The blocked_lock continues to
> serialize the task->blocked* state through the patch series.

Well, there was only ->blocked_on, and on UP you don't need
serialization beyond disabling preemption.

The tricky bit is SMP; there you need something to stabilize the
blocked relation.
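
Something like this, just to illustrate (not the exact layout the
series uses):

	struct task_struct {
		/* ... */
		struct mutex	*blocked_on;	/* mutex we are blocked on */
		raw_spinlock_t	 blocked_lock;	/* stabilizes ->blocked_on */
		/* ... */
	};

The point being: on SMP some other CPU can be poking at that state
while we walk it from schedule(), and blocked_lock (or whatever ends
up serializing it) is what keeps the relation stable for the duration.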