Message-ID: <20221221173005.GB37362@lothringen>
Date: Wed, 21 Dec 2022 18:30:05 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Boqun Feng <boqun.feng@...il.com>
Cc: Joel Fernandes <joel@...lfernandes.org>,
linux-kernel@...r.kernel.org,
Josh Triplett <josh@...htriplett.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
"Paul E. McKenney" <paulmck@...nel.org>, rcu@...r.kernel.org,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [RFC 0/2] srcu: Remove pre-flip memory barrier
On Wed, Dec 21, 2022 at 08:02:28AM -0800, Boqun Feng wrote:
> On Wed, Dec 21, 2022 at 12:26:29PM +0100, Frederic Weisbecker wrote:
> > On Tue, Dec 20, 2022 at 09:41:17PM -0500, Joel Fernandes wrote:
> > >
> > >
> > > > On Dec 20, 2022, at 7:50 PM, Frederic Weisbecker <frederic@...nel.org> wrote:
> > > >
> > > > On Tue, Dec 20, 2022 at 07:15:00PM -0500, Joel Fernandes wrote:
> > > >> On Tue, Dec 20, 2022 at 5:45 PM Frederic Weisbecker <frederic@...nel.org> wrote:
> > > >> Agreed about (1).
> > > >>
> > > >>> _ In (2), E pairs with the address-dependency between idx and lock_count.
> > > >>
> > > >> But that is not the only reason. If that were the only reason for (2),
> > > >> then there is already an smp_mb() just before the lock counts are read
> > > >> on the next scan after the flip.
> > > >
> > > > The post-flip barrier makes sure the new idx is visible on the next READER's
> > > > turn, but it doesn't protect against the fact that "READ idx then WRITE lock[idx]"
> > > > may appear unordered from the update side POV if there is no barrier between the
> > > > scan and the flip.
> > > >
> > > > If you remove the smp_mb() from the litmus test I sent, things explode.
> > >
> > > Sure, I see what you are saying and it's a valid point as well. However, why do you need memory barrier D (labeled as such in the kernel code) for that? You already have memory barrier A before the lock count is read, and that suffices for the ordering that pairs with the address dependency.
> > > In other words, if the updater sees the readers' lock counts, then the readers would be making those lock count updates on the post-flip inactive index, not the one being scanned as you wanted, and you accomplish that with memory barrier A alone.
> > >
> > > So D fixes the issue you are talking about (the lock count update), but that is already fixed by memory barrier A. You still need D for the issue I mentioned (unlock counts vs the flip).
> > >
> > > That’s just my opinion and let’s discuss more because I cannot rule out that I
> > > am missing something with this complicated topic ;-)
> >
> > I must be missing something. I often do.
> >
> > Ok let's put that on litmus:
> >
> > ----
> > C srcu
> >
> > {}
> >
> > // updater
> > P0(int *IDX, int *LOCK0, int *UNLOCK0, int *LOCK1, int *UNLOCK1)
> > {
> > 	int lock1;
> > 	int unlock1;
> > 	int lock0;
> > 	int unlock0;
> >
> > 	// SCAN1
> > 	unlock1 = READ_ONCE(*UNLOCK1);
> > 	smp_mb(); // A
> > 	lock1 = READ_ONCE(*LOCK1);
> >
> > 	// FLIP
> > 	smp_mb(); // E
>
> In the real code there is a control dependency between the READ_ONCE() above
> and the WRITE_ONCE() below, i.e. the idx is only flipped when lock1 ==
> unlock1. Maybe try with the P0 below? Untested due to not having herd on
> this computer ;-)
>
> > 	WRITE_ONCE(*IDX, 1);
> > 	smp_mb(); // D
> >
> > 	// SCAN2
> > 	unlock0 = READ_ONCE(*UNLOCK0);
> > 	smp_mb(); // A
> > 	lock0 = READ_ONCE(*LOCK0);
> > }
> >
> P0(int *IDX, int *LOCK0, int *UNLOCK0, int *LOCK1, int *UNLOCK1)
> {
> 	int lock1;
> 	int unlock1;
> 	int lock0;
> 	int unlock0;
>
> 	// SCAN1
> 	unlock1 = READ_ONCE(*UNLOCK1);
> 	smp_mb(); // A
> 	lock1 = READ_ONCE(*LOCK1);
>
> 	// FLIP
> 	if (unlock1 == lock1) {
> 		smp_mb(); // E
> 		WRITE_ONCE(*IDX, 1);
> 		smp_mb(); // D
>
> 		// SCAN2
> 		unlock0 = READ_ONCE(*UNLOCK0);
> 		smp_mb(); // A
> 		lock0 = READ_ONCE(*LOCK0);
> 	}
> }
That becomes the litmus test below (same effect).
C D

{}

// updater
P0(int *IDX, int *LOCK0, int *UNLOCK0, int *LOCK1, int *UNLOCK1)
{
	int lock1;
	int unlock1;
	int lock0;
	int unlock0;

	// SCAN1
	unlock1 = READ_ONCE(*UNLOCK1);
	smp_mb(); // A
	lock1 = READ_ONCE(*LOCK1);

	if (unlock1 == lock1) {
		// FLIP
		smp_mb(); // E
		WRITE_ONCE(*IDX, 1);
		smp_mb(); // D

		// SCAN2
		unlock0 = READ_ONCE(*UNLOCK0);
		smp_mb(); // A
		lock0 = READ_ONCE(*LOCK0);
	}
}
// reader
P1(int *IDX, int *LOCK0, int *UNLOCK0, int *LOCK1, int *UNLOCK1)
{
	int tmp;
	int idx;

	// first reader
	idx = READ_ONCE(*IDX);
	if (idx == 0) {
		tmp = READ_ONCE(*LOCK0);
		WRITE_ONCE(*LOCK0, tmp + 1);
		smp_mb(); /* B and C */
		tmp = READ_ONCE(*UNLOCK0);
		WRITE_ONCE(*UNLOCK0, tmp + 1);
	} else {
		tmp = READ_ONCE(*LOCK1);
		WRITE_ONCE(*LOCK1, tmp + 1);
		smp_mb(); /* B and C */
		tmp = READ_ONCE(*UNLOCK1);
		WRITE_ONCE(*UNLOCK1, tmp + 1);
	}

	// second reader
	idx = READ_ONCE(*IDX);
	if (idx == 0) {
		tmp = READ_ONCE(*LOCK0);
		WRITE_ONCE(*LOCK0, tmp + 1);
		smp_mb(); /* B and C */
		tmp = READ_ONCE(*UNLOCK0);
		WRITE_ONCE(*UNLOCK0, tmp + 1);
	} else {
		tmp = READ_ONCE(*LOCK1);
		WRITE_ONCE(*LOCK1, tmp + 1);
		smp_mb(); /* B and C */
		tmp = READ_ONCE(*UNLOCK1);
		WRITE_ONCE(*UNLOCK1, tmp + 1);
	}

	// third reader
	idx = READ_ONCE(*IDX);
	if (idx == 0) {
		tmp = READ_ONCE(*LOCK0);
		WRITE_ONCE(*LOCK0, tmp + 1);
		smp_mb(); /* B and C */
		tmp = READ_ONCE(*UNLOCK0);
		WRITE_ONCE(*UNLOCK0, tmp + 1);
	} else {
		tmp = READ_ONCE(*LOCK1);
		WRITE_ONCE(*LOCK1, tmp + 1);
		smp_mb(); /* B and C */
		tmp = READ_ONCE(*UNLOCK1);
		WRITE_ONCE(*UNLOCK1, tmp + 1);
	}
}

exists (0:unlock0=0 /\ 1:idx=0)
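
For reference, the reader side that P1 approximates looks roughly like the
following (a simplified sketch from memory, not necessarily the exact
srcutree.c code). The address of the incremented counter depends on the idx
just read, which is the address dependency discussed above, and the smp_mb()
calls are barriers B and C. Each reader in P1 collapses such a lock+unlock
pair into a single pair of counter increments on the chosen index.

	int __srcu_read_lock(struct srcu_struct *ssp)
	{
		int idx;

		/* Read the current index, then bump the matching lock counter. */
		idx = READ_ONCE(ssp->srcu_idx) & 0x1;
		this_cpu_inc(ssp->sda->srcu_lock_count[idx]);
		smp_mb(); /* B */  /* Avoid leaking the critical section. */
		return idx;
	}

	void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
	{
		smp_mb(); /* C */  /* Avoid leaking the critical section. */
		this_cpu_inc(ssp->sda->srcu_unlock_count[idx]);
	}

The exists clause asks whether the updater's SCAN2 can read UNLOCK0 == 0 while
the third reader still observed IDX == 0. Running the test with herd7 against
the LKMM (tools/memory-model) then shows whether that outcome is reachable,
with and without each of the barriers above.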