Message-ID: <20221216155803.GB8949@willie-the-truck>
Date: Fri, 16 Dec 2022 15:58:04 +0000
From: Will Deacon <will@...nel.org>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>, Jan Kara <jack@...e.cz>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>,
Pierre Gondois <pierre.gondois@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Catalin Marinas <catalin.marinas@....com>,
Davidlohr Bueso <dave@...olabs.net>,
LKML <linux-kernel@...r.kernel.org>,
Linux-RT <linux-rt-users@...r.kernel.org>
Subject: Re: [PATCH] rtmutex: Add acquire semantics for rtmutex lock acquisition

Hey Mel,

On Fri, Dec 16, 2022 at 01:55:48PM +0000, Mel Gorman wrote:
> On Fri, Dec 16, 2022 at 11:14:12AM +0000, Will Deacon wrote:
> > > diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
> > > index 35212f260148..af0dbe4d5e97 100644
> > > --- a/kernel/locking/rtmutex.c
> > > +++ b/kernel/locking/rtmutex.c
> > > @@ -238,6 +238,13 @@ static __always_inline void mark_rt_mutex_waiters(struct rt_mutex_base *lock)
> > > owner = *p;
> > > } while (cmpxchg_relaxed(p, owner,
> > > owner | RT_MUTEX_HAS_WAITERS) != owner);
> > > +
> > > + /*
> > > + * The cmpxchg loop above is relaxed to avoid back-to-back ACQUIRE
> > > + * operations in the event of contention. Ensure the successful
> > > + * cmpxchg is visible.
> > > + */
> > > + smp_mb__after_atomic();
> >
> > Could we use smp_acquire__after_ctrl_dep() instead?
> >
>
> It might be sufficient, but it's not as obviously correct: it lacks a
> pairing that is obviously and definitely correct. The control dependency
> combined with smp_acquire__after_ctrl_dep() *should* be sufficient
> against the lock fast path based on the available documentation, but I
> was unable to convince myself that this is definitely correct on all CPUs.

I'm reasonably confident (insofar as you can be confident about any of
this) that it will work, and on arm64 it will result in a slightly cheaper
instruction being emitted. However, I just chimed in because you asked for
my opinion, so I'm absolutely fine with whichever direction you prefer to
take!

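For reference, a minimal sketch of the variant I had in mind, assuming the
same loop shape as the hunk above (the local variable declarations aren't
in the quoted context, so they're reconstructed rather than copied):

static __always_inline void mark_rt_mutex_waiters(struct rt_mutex_base *lock)
{
	unsigned long owner, *p = (unsigned long *) &lock->owner;

	do {
		owner = *p;
	} while (cmpxchg_relaxed(p, owner,
				 owner | RT_MUTEX_HAS_WAITERS) != owner);

	/*
	 * The loop only exits once the relaxed cmpxchg has succeeded, so
	 * subsequent accesses are already ordered after the load half of
	 * the cmpxchg by the conditional branch; this promotes that
	 * control dependency to ACQUIRE ordering.
	 */
	smp_acquire__after_ctrl_dep();
}

The saving on arm64 comes from smp_acquire__after_ctrl_dep() expanding to
smp_rmb() (a DMB ISHLD) rather than the full DMB ISH implied by
smp_mb__after_atomic().
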
> Given that arm64 was trivial to crash on PREEMPT_RT before the patch
> (hackbench pipes also triggers the same problem), I'm reluctant to try to
> be too clever, particularly as I didn't have a reproducer for a problem
> in this specific path. If someone can demonstrate a reasonable case where
> smp_mb__after_atomic() is too heavy and document that it can be safely
> relaxed, then great. At least if that relaxation is wrong, there will be
> a bisection point between the fix and the reintroduction of the damage.

I guess bigeasy can give the weaker barrier a try if he has the time, but
otherwise we can leave the change as-is.

Cheers,
Will