Message-ID: <20170920211727.GA26082@andrea>
Date: Wed, 20 Sep 2017 23:17:27 +0200
From: Andrea Parri <parri.andrea@...il.com>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: Prateek Sood <prsood@...eaurora.org>, mingo@...nel.org,
longman@...hat.com, peterz@...radead.org,
linux-kernel@...r.kernel.org, sramana@...eaurora.org
Subject: Re: [PATCH] rwsem: fix missed wakeup due to reordering of load
On Wed, Sep 20, 2017 at 07:52:54AM -0700, Davidlohr Bueso wrote:
> On Thu, 07 Sep 2017, Prateek Sood wrote:
> > /*
> >+ * __rwsem_down_write_failed_common(sem)
> >+ *   rwsem_optimistic_spin(sem)
> >+ *     osq_unlock(sem->osq)
> >+ *   ...
> >+ *   atomic_long_add_return(&sem->count)
> >+ *
> >+ *      - VS -
> >+ *
> >+ *              __up_write()
> >+ *                if (atomic_long_sub_return_release(&sem->count) < 0)
> >+ *                  rwsem_wake(sem)
> >+ *                    osq_is_locked(&sem->osq)
> >+ *
> >+ * And __up_write() must observe !osq_is_locked() when it observes the
> >+ * atomic_long_add_return() in order to not miss a wakeup.
> >+ *
> >+ * This boils down to:
> >+ *
> >+ * [S.rel] X = 1                [RmW] r0 = (Y += 0)
> >+ *         MB                         RMB
> >+ * [RmW]   Y += 1               [L]   r1 = X
> >+ *
> >+ * exists (r0=1 /\ r1=0)
> >+ */
> >+	smp_rmb();
>
> Instead, how about just removing the release from atomic_long_sub_return_release()
> such that the osq load is not hoisted over the atomic compound (along with Peter's
> comment):
This solution would actually enforce a stronger (full) ordering than the
solution described by Prateek and Peter. It would also trade two lwsync
barriers for two sync barriers (on powerpc), and one dmb.ld for one dmb
(on arm64).

What are the reasons you would prefer this?
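
For reference, below is a minimal user-space sketch of the litmus test quoted
above, using C11 atomics and pthreads as rough stand-ins for the kernel
primitives (an approximation: seq_cst stands in for the kernel's fully
ordered RMW and the acquire fence for smp_rmb(); running it demonstrates the
pattern, it does not prove the forbidden outcome cannot occur):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int X;	/* ~ sem->osq; X == 1 plays the role of "unlocked" */
static atomic_int Y;	/* ~ sem->count */
static int r0, r1;

/* ~ rwsem_optimistic_spin(): osq_unlock(), then atomic_long_add_return() */
static void *p0(void *arg)
{
	atomic_store_explicit(&X, 1, memory_order_release);	 /* [S.rel] X = 1 */
	atomic_fetch_add_explicit(&Y, 1, memory_order_seq_cst); /* MB; [RmW] Y += 1 */
	return arg;
}

/* ~ __up_write() + rwsem_wake(): sub_return_release(), smp_rmb(), osq_is_locked() */
static void *p1(void *arg)
{
	r0 = atomic_fetch_add_explicit(&Y, 0, memory_order_release); /* [RmW] r0 = (Y += 0) */
	atomic_thread_fence(memory_order_acquire);	/* RMB (~ smp_rmb()) */
	r1 = atomic_load_explicit(&X, memory_order_relaxed);	/* [L] r1 = X */
	return arg;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, p0, NULL);
	pthread_create(&t1, NULL, p1, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);

	/* "exists (r0=1 /\ r1=0)" is the forbidden, missed-wakeup outcome. */
	printf("r0=%d r1=%d%s\n", r0, r1,
	       (r0 == 1 && r1 == 0) ? " <- forbidden outcome" : "");
	return 0;
}
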
Andrea
>
> diff --git a/include/asm-generic/rwsem.h b/include/asm-generic/rwsem.h
> index 6c6a2141f271..487ce31078ff 100644
> --- a/include/asm-generic/rwsem.h
> +++ b/include/asm-generic/rwsem.h
> @@ -101,7 +101,7 @@ static inline void __up_read(struct rw_semaphore *sem)
>   */
>  static inline void __up_write(struct rw_semaphore *sem)
>  {
> -	if (unlikely(atomic_long_sub_return_release(RWSEM_ACTIVE_WRITE_BIAS,
> +	if (unlikely(atomic_long_sub_return(RWSEM_ACTIVE_WRITE_BIAS,
>  						    &sem->count) < 0))
>  		rwsem_wake(sem);
>  }
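
For comparison, a user-space sketch of the effect of that change (the names,
the bias value, and the use of a C11 seq_cst RMW as a stand-in for the
kernel's fully ordered atomic_long_sub_return() are illustrative assumptions,
not the kernel's actual definitions):

#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative placeholder only, not the kernel's RWSEM_ACTIVE_WRITE_BIAS. */
#define WRITE_BIAS_SKETCH	(1L << 16)

static atomic_long count;	/* ~ sem->count */
static atomic_int  osq_tail;	/* ~ sem->osq; 0 means unlocked */

static bool osq_is_locked_sketch(void)
{
	return atomic_load_explicit(&osq_tail, memory_order_relaxed) != 0;
}

/* ~ __up_write() with the fully ordered (non-_release) RMW suggested above */
static void up_write_sketch(void)
{
	/*
	 * A seq_cst RMW has at least acquire semantics, so the
	 * osq_is_locked_sketch() load below cannot be hoisted above the
	 * counter update; dropping _release in the kernel similarly keeps
	 * the osq load ordered after the sub_return, at the price of full
	 * ordering around the RMW.
	 *
	 * fetch_sub returns the old value; subtract again to mimic the
	 * new value returned by sub_return().
	 */
	long c = atomic_fetch_sub_explicit(&count, WRITE_BIAS_SKETCH,
					   memory_order_seq_cst)
		 - WRITE_BIAS_SKETCH;

	if (c < 0) {
		/*
		 * ~ rwsem_wake(); the osq_is_locked() check in there is
		 * the load that must not be reordered before the RMW.
		 */
		(void)osq_is_locked_sketch();
	}
}

int main(void)
{
	/* Arbitrary starting value so the wake path above is exercised. */
	atomic_store(&count, WRITE_BIAS_SKETCH - 1);
	up_write_sketch();
	return 0;
}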