Message-ID: <YTI2UjKy+C7LeIf+@boqun-archlinux>
Date: Fri, 3 Sep 2021 22:50:58 +0800
From: Boqun Feng <boqun.feng@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Davidlohr Bueso <dave@...olabs.net>,
Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Mike Galbraith <efault@....de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] locking: rwbase: Take care of ordering guarantee for
fastpath reader

On Thu, Sep 02, 2021 at 01:55:29PM +0200, Peter Zijlstra wrote:
[...]
> > raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
> > rwbase_rtmutex_unlock(rtm);
> > }
> > @@ -216,8 +229,14 @@ static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
> > */
> > rwbase_set_and_save_current_state(state);
> >
> > - /* Block until all readers have left the critical section. */
> > - for (; atomic_read(&rwb->readers);) {
> > + /*
> > + * Block until all readers have left the critical section.
> > + *
> > + * _acquire() is needed in case the reader side runs in the fast
> > + * path; pairing with the atomic_dec_and_test() in rwbase_read_unlock(),
> > + * it provides ACQUIRE.
> > + */
> > + for (; atomic_read_acquire(&rwb->readers);) {
> > /* Optimized out for rwlocks */
> > if (rwbase_signal_pending_state(state, current)) {
> > __set_current_state(TASK_RUNNING);
>
> I think we can restructure things to avoid this one, but yes. Suppose we
> do:
>
> readers = atomic_sub_return_relaxed(READER_BIAS, &rwb->readers);
>
> /*
> * These two provide either an smp_mb() or an UNLOCK+LOCK
By "UNLOCK+LOCK", you mean unlock(->pi_lock) + lock(->wait_lock), right?
This may be unrelated, but in our memory model only unlock+lock pairs on
the same lock provide TSO-like ordering ;-) IOW, unlock(->pi_lock) +
lock(->wait_lock) on the same CPU doesn't order reads before and after.
Consider the following litmus:

C unlock-lock

{
}

P0(spinlock_t *s, spinlock_t *p, int *x, int *y)
{
	int r1;
	int r2;

	spin_lock(s);
	r1 = READ_ONCE(*x);
	spin_unlock(s);

	spin_lock(p);
	r2 = READ_ONCE(*y);
	spin_unlock(p);
}

P1(int *x, int *y)
{
	WRITE_ONCE(*y, 1);
	smp_wmb();
	WRITE_ONCE(*x, 1);
}

exists (0:r1=1 /\ 0:r2=0)

herd result:
Test unlock-lock Allowed
States 4
0:r1=0; 0:r2=0;
0:r1=0; 0:r2=1;
0:r1=1; 0:r2=0;
0:r1=1; 0:r2=1;
Ok
Witnesses
Positive: 1 Negative: 3
Condition exists (0:r1=1 /\ 0:r2=0)
Observation unlock-lock Sometimes 1 3
Time unlock-lock 0.01
Hash=a8b772fd25f963f73a0d8e70e36ee255
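
For contrast, if both critical sections take the same lock, I'd expect
herd to report the r1=1 /\ r2=0 outcome as forbidden (I haven't pasted
a run for this one; it's only meant to illustrate the same-lock case):

C unlock-lock-same-lock

{
}

P0(spinlock_t *s, int *x, int *y)
{
	int r1;
	int r2;

	spin_lock(s);
	r1 = READ_ONCE(*x);
	spin_unlock(s);

	spin_lock(s);
	r2 = READ_ONCE(*y);
	spin_unlock(s);
}

P1(int *x, int *y)
{
	WRITE_ONCE(*y, 1);
	smp_wmb();
	WRITE_ONCE(*x, 1);
}

exists (0:r1=1 /\ 0:r2=0)
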
> * ordering, either is strong enough to provide ACQUIRE order
> * for the above load of @readers.
> */
> rwbase_set_and_save_current_state(state);
> raw_spin_lock_irqsave(&rtm->wait_lock, flags);
>
> while (readers) {
> ...
> readers = atomic_read(&rwb->readers);
The above should be an _acquire(), right? It pairs with the last reader
exiting the critical section and decrementing ->readers to 0. If so,
doesn't that undermine the necessity of the restructure?
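
IOW, even with the restructure, I think the slowpath would end up looking
roughly like below (your sketch, with only the re-read changed and the
same elisions kept):

	readers = atomic_sub_return_relaxed(READER_BIAS, &rwb->readers);

	rwbase_set_and_save_current_state(state);
	raw_spin_lock_irqsave(&rtm->wait_lock, flags);

	while (readers) {
		...
		/* Pairs with the atomic_dec_and_test() in rwbase_read_unlock() */
		readers = atomic_read_acquire(&rwb->readers);
		if (readers)
			rwbase_schedule();
		...
	}
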
Regards,
Boqun
> if (readers)
> rwbase_schedule();
> ...
> }
>
>
> > @@ -229,6 +248,9 @@ static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
> > /*
> > * Schedule and wait for the readers to leave the critical
> > * section. The last reader leaving it wakes the waiter.
> > + *
> > + * _acquire() is not needed, because we can rely on the smp_mb()
> > + * in set_current_state() to provide ACQUIRE.
> > */
> > if (atomic_read(&rwb->readers) != 0)
> > rwbase_schedule();
> > @@ -253,7 +275,11 @@ static inline int rwbase_write_trylock(struct rwbase_rt *rwb)
> > atomic_sub(READER_BIAS, &rwb->readers);
> >
> > raw_spin_lock_irqsave(&rtm->wait_lock, flags);
> > - if (!atomic_read(&rwb->readers)) {
> > + /*
> > + * _acquire() is needed in case the reader is in the fast path; pairing
> > + * with rwbase_read_unlock(), it provides ACQUIRE.
> > + */
> > + if (!atomic_read_acquire(&rwb->readers)) {
>
> Moo; the alternative is using dec_and_lock instead of dec_and_test, but
> that's not going to be worth it.
>
> > atomic_set(&rwb->readers, WRITER_BIAS);
> > raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
> > return 1;
> > --
> > 2.32.0
> >