Message-ID: <20210904101219.GA4323@worktop.programming.kicks-ass.net>
Date: Sat, 4 Sep 2021 12:12:19 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Boqun Feng <boqun.feng@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Davidlohr Bueso <dave@...olabs.net>,
Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Mike Galbraith <efault@....de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] locking: rwbase: Take care of ordering guarantee for
fastpath reader
On Fri, Sep 03, 2021 at 10:50:58PM +0800, Boqun Feng wrote:
> On Thu, Sep 02, 2021 at 01:55:29PM +0200, Peter Zijlstra wrote:
> [...]
> > > raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
> > > rwbase_rtmutex_unlock(rtm);
> > > }
> > > @@ -216,8 +229,14 @@ static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
> > > */
> > > rwbase_set_and_save_current_state(state);
> > >
> > > - /* Block until all readers have left the critical section. */
> > > - for (; atomic_read(&rwb->readers);) {
> > > + /*
> > > + * Block until all readers have left the critical section.
> > > + *
> > > + * _acquire() is needed in case the reader side runs in the fast
> > > + * path; it pairs with the atomic_dec_and_test() in rwbase_read_unlock()
> > > + * to provide ACQUIRE ordering.
> > > + */
> > > + for (; atomic_read_acquire(&rwb->readers);) {
> > > /* Optimized out for rwlocks */
> > > if (rwbase_signal_pending_state(state, current)) {
> > > __set_current_state(TASK_RUNNING);
> >
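FWIW, the pairing being added above, reduced to a userspace C11 sketch
(the names are made up and this is not the actual rwbase_rt code, just
the shape of the ordering):

	#include <stdatomic.h>

	struct fake_rwbase {
		atomic_int readers;
		int data;
	};

	/* Reader fast-path unlock; stands in for atomic_dec_and_test(). */
	static void reader_unlock(struct fake_rwbase *rwb)
	{
		/* RELEASE; pairs with the acquire load in the writer below. */
		atomic_fetch_sub_explicit(&rwb->readers, 1, memory_order_release);
	}

	/* Writer waiting for readers; stands in for the loop above. */
	static void writer_wait(struct fake_rwbase *rwb)
	{
		/* ACQUIRE orders the writer's later accesses after the reader CSs. */
		while (atomic_load_explicit(&rwb->readers, memory_order_acquire))
			;	/* the real code blocks on the rtmutex instead */

		rwb->data++;	/* cannot be reordered before readers == 0 */
	}
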
> > I think we can restructure things to avoid this one, but yes. Suppose we
> > do:
> >
> > readers = atomic_sub_return_relaxed(READER_BIAS, &rwb->readers);
> >
> > /*
> > * These two provide either an smp_mb() or an UNLOCK+LOCK
>
> By "UNLOCK+LOCK", you mean unlock(->pi_lock) + lock(->wait_lock), right?
> This may be unrelated, but in our memory model only unlock+lock pairs on
> the same lock provide TSO-like ordering ;-) IOW, unlock(->pi_lock) +
> lock(->wait_lock) on the same CPU doesn't order a read before the unlock
> against a read after the lock.
Hurpm.. what actual hardware does that? PPC uses LWSYNC for
ACQUIRE/RELEASE, and ARM64 has RCsc RELEASE+ACQUIRE ordering.
Both should provide RC-TSO (or stronger) for UNLOCK-A + LOCK-B.
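
For completeness, the LKMM distinction at issue can be written as an
MP-shaped litmus test (a sketch, not taken from the patch or the
memory-model sources): with two different locks the "exists" clause is
allowed by the model, while with s == t it is forbidden, even though,
per the above, PPC and arm64 would forbid it either way.

	C unlock-lock-different-locks

	{}

	P0(int *x, int *y, spinlock_t *s, spinlock_t *t)
	{
		int r0;
		int r1;

		spin_lock(s);
		r0 = READ_ONCE(*x);
		spin_unlock(s);

		spin_lock(t);
		r1 = READ_ONCE(*y);
		spin_unlock(t);
	}

	P1(int *x, int *y)
	{
		WRITE_ONCE(*y, 1);
		smp_mb();
		WRITE_ONCE(*x, 1);
	}

	exists (0:r0=1 /\ 0:r1=0)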