Message-ID: <CAEXW_YRe=h0tuRyp=2N1mB9ytsiFLL6U4UX=Od5PN-=7FwuDsg@mail.gmail.com>
Date: Wed, 21 Dec 2022 19:33:10 +0000
From: Joel Fernandes <joel@...lfernandes.org>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: Boqun Feng <boqun.feng@...il.com>, linux-kernel@...r.kernel.org,
Josh Triplett <josh@...htriplett.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
"Paul E. McKenney" <paulmck@...nel.org>, rcu@...r.kernel.org,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [RFC 0/2] srcu: Remove pre-flip memory barrier
Ah Frederic, I think you nailed it. E is required to order the flip
write against the control dependency on the read side. I can confirm
that, without E, the test below (with the bad condition added) shows
the previous reader seeing the post-flip index when it shouldn't have.
Please see my modifications to your litmus test below.

I think we should document in the code that E pairs with the control
dependency between the idx read and the lock-count write.
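For reference, here is roughly how that pairing looks in the source. The
function and field names below are from my (possibly stale) memory of
kernel/rcu/srcutree.c, so treat this as an illustrative sketch of where such a
comment would go, not the exact code:

/* Reader side: idx read, then a lock-count write that depends on idx. */
int __srcu_read_lock(struct srcu_struct *ssp)
{
        int idx;

        idx = READ_ONCE(ssp->srcu_idx) & 0x1;          /* read the active index */
        this_cpu_inc(ssp->sda->srcu_lock_count[idx]);  /* lock-count write, dependent on idx */
        smp_mb(); /* B */  /* Avoid leaking the critical section. */
        return idx;
}

/* Updater side: E orders the pre-flip scan before the flip write, pairing
 * with the reader's idx -> lock-count dependency above. */
static void srcu_flip(struct srcu_struct *ssp)
{
        smp_mb(); /* E */  /* Pairs with the reader-side dependency ordering. */
        WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
        smp_mb(); /* D */  /* Pairs with C. */
}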
C srcu

{}

// updater
P0(int *IDX, int *LOCK0, int *UNLOCK0, int *LOCK1, int *UNLOCK1)
{
        int lock1;
        int unlock1;
        int lock0;
        int unlock0;

        // SCAN1
        unlock1 = READ_ONCE(*UNLOCK1);
        smp_mb(); // A
        lock1 = READ_ONCE(*LOCK1);

        // FLIP
        smp_mb(); // E -- required to make the bad condition not happen
        WRITE_ONCE(*IDX, 1);
        smp_mb(); // D

        // SCAN2
        unlock0 = READ_ONCE(*UNLOCK0);
        smp_mb(); // A
        lock0 = READ_ONCE(*LOCK0);
}

// reader
P1(int *IDX, int *LOCK0, int *UNLOCK0, int *LOCK1, int *UNLOCK1)
{
        int tmp;
        int idx1;
        int idx2;

        // 1st reader
        idx1 = READ_ONCE(*IDX);
        if (idx1 == 0) {
                tmp = READ_ONCE(*LOCK0);
                WRITE_ONCE(*LOCK0, tmp + 1);
                smp_mb(); /* B and C */
                tmp = READ_ONCE(*UNLOCK0);
                WRITE_ONCE(*UNLOCK0, tmp + 1);
        } else {
                tmp = READ_ONCE(*LOCK1);
                WRITE_ONCE(*LOCK1, tmp + 1);
                smp_mb(); /* B and C */
                tmp = READ_ONCE(*UNLOCK1);
                WRITE_ONCE(*UNLOCK1, tmp + 1);
        }

        // second reader
        idx2 = READ_ONCE(*IDX);
        if (idx2 == 0) {
                tmp = READ_ONCE(*LOCK0);
                WRITE_ONCE(*LOCK0, tmp + 1);
                smp_mb(); /* B and C */
                tmp = READ_ONCE(*UNLOCK0);
                WRITE_ONCE(*UNLOCK0, tmp + 1);
        } else {
                tmp = READ_ONCE(*LOCK1);
                WRITE_ONCE(*LOCK1, tmp + 1);
                smp_mb(); /* B and C */
                tmp = READ_ONCE(*UNLOCK1);
                WRITE_ONCE(*UNLOCK1, tmp + 1);
        }
}

exists (0:lock1=1 /\ 1:idx1=1 /\ 1:idx2=1) (* bad condition: 1st reader saw flip *)
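(To reproduce: save the test as, say, srcu-flip.litmus -- the name is just an
example -- and run it through herd7 with the LKMM from tools/memory-model,
something like:

        herd7 -conf linux-kernel.cfg srcu-flip.litmus

With E in place the bad condition is never observed; drop the smp_mb() at E
and herd reports a witness for it.)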
On Wed, Dec 21, 2022 at 5:30 PM Frederic Weisbecker <frederic@...nel.org> wrote:
>
> On Wed, Dec 21, 2022 at 08:02:28AM -0800, Boqun Feng wrote:
> > On Wed, Dec 21, 2022 at 12:26:29PM +0100, Frederic Weisbecker wrote:
> > > On Tue, Dec 20, 2022 at 09:41:17PM -0500, Joel Fernandes wrote:
> > > >
> > > >
> > > > > On Dec 20, 2022, at 7:50 PM, Frederic Weisbecker <frederic@...nel.org> wrote:
> > > > >
> > > > > On Tue, Dec 20, 2022 at 07:15:00PM -0500, Joel Fernandes wrote:
> > > > >> On Tue, Dec 20, 2022 at 5:45 PM Frederic Weisbecker <frederic@...nel.org> wrote:
> > > > >> Agreed about (1).
> > > > >>
> > > > >>> _ In (2), E pairs with the address-dependency between idx and lock_count.
> > > > >>
> > > > >> But that is not the only reason. If that was the only reason for (2),
> > > > >> then there is an smp_mb() just before the next-scan post-flip before
> > > > >> the lock counts are read.
> > > > >
> > > > > The post-flip barrier makes sure the new idx is visible on the next READER's
> > > > > turn, but it doesn't protect against the fact that "READ idx then WRITE lock[idx]"
> > > > > may appear unordered from the update side POV if there is no barrier between the
> > > > > scan and the flip.
> > > > >
> > > > > If you remove the smp_mb() from the litmus test I sent, things explode.
> > > >
> > > > Sure I see what you are saying and it’s a valid point as well. However why do you need memory barrier D (labeled such in the kernel code) for that? You already have a memory barrier A before the lock count is read. That will suffice for the ordering pairing with the addr dependency.
> > > > In other words, if updater sees readers lock counts, then reader would be making those lock count updates on post-flip inactive index, not the one being scanned as you wanted, and you will accomplish that just with the mem barrier A.
> > > >
> > > > So D fixes the above issue you are talking about (lock count update), however that is already fixed by the memory barrier A. But you still need D for the issue I mentioned (unlock counts vs flip).
> > > >
> > > > That’s just my opinion and let’s discuss more because I cannot rule out that I
> > > > am missing something with this complicated topic ;-)
> > >
> > > I must be missing something. I often do.
> > >
> > > Ok let's put that on litmus:
> > >
> > > ----
> > > C srcu
> > >
> > > {}
> > >
> > > // updater
> > > P0(int *IDX, int *LOCK0, int *UNLOCK0, int *LOCK1, int *UNLOCK1)
> > > {
> > > int lock1;
> > > int unlock1;
> > > int lock0;
> > > int unlock0;
> > >
> > > // SCAN1
> > > unlock1 = READ_ONCE(*UNLOCK1);
> > > smp_mb(); // A
> > > lock1 = READ_ONCE(*LOCK1);
> > >
> > > // FLIP
> > > smp_mb(); // E
> >
> > In real code there is a control dependency between the READ_ONCE() above
> > and the WRITE_ONCE() before, i.e. only flip the idx when lock1 ==
> > unlock1, maybe try with the P0 below? Untested due to not having herd on
> > this computer ;-)
> >
> > > WRITE_ONCE(*IDX, 1);
> > > smp_mb(); // D
> > >
> > > // SCAN2
> > > unlock0 = READ_ONCE(*UNLOCK0);
> > > smp_mb(); // A
> > > lock0 = READ_ONCE(*LOCK0);
> > > }
> > >
> > P0(int *IDX, int *LOCK0, int *UNLOCK0, int *LOCK1, int *UNLOCK1)
> > {
> > int lock1;
> > int unlock1;
> > int lock0;
> > int unlock0;
> >
> > // SCAN1
> > unlock1 = READ_ONCE(*UNLOCK1);
> > smp_mb(); // A
> > lock1 = READ_ONCE(*LOCK1);
> >
> > // FLIP
> > if (unlock1 == lock1) {
> > smp_mb(); // E
> > WRITE_ONCE(*IDX, 1);
> > smp_mb(); // D
> >
> > // SCAN2
> > unlock0 = READ_ONCE(*UNLOCK0);
> > smp_mb(); // A
> > lock0 = READ_ONCE(*LOCK0);
> > }
> > }
>
> That becomes the below (same effect).
>
> C D
>
> {}
>
> // updater
> P0(int *IDX, int *LOCK0, int *UNLOCK0, int *LOCK1, int *UNLOCK1)
> {
> int lock1;
> int unlock1;
> int lock0;
> int unlock0;
>
> // SCAN1
> unlock1 = READ_ONCE(*UNLOCK1);
> smp_mb(); // A
> lock1 = READ_ONCE(*LOCK1);
>
> if (unlock1 == lock1) {
> // FLIP
> smp_mb(); // E
> WRITE_ONCE(*IDX, 1);
> smp_mb(); // D
>
> // SCAN 2
> unlock0 = READ_ONCE(*UNLOCK0);
> smp_mb(); // A
> lock0 = READ_ONCE(*LOCK0);
> }
> }
>
> // reader
> P1(int *IDX, int *LOCK0, int *UNLOCK0, int *LOCK1, int *UNLOCK1)
> {
> int tmp;
> int idx;
>
> // 1st reader
> idx = READ_ONCE(*IDX);
> if (idx == 0) {
> tmp = READ_ONCE(*LOCK0);
> WRITE_ONCE(*LOCK0, tmp + 1);
> smp_mb(); /* B and C */
> tmp = READ_ONCE(*UNLOCK0);
> WRITE_ONCE(*UNLOCK0, tmp + 1);
> } else {
> tmp = READ_ONCE(*LOCK1);
> WRITE_ONCE(*LOCK1, tmp + 1);
> smp_mb(); /* B and C */
> tmp = READ_ONCE(*UNLOCK1);
> WRITE_ONCE(*UNLOCK1, tmp + 1);
> }
>
> // second reader
> idx = READ_ONCE(*IDX);
> if (idx == 0) {
> tmp = READ_ONCE(*LOCK0);
> WRITE_ONCE(*LOCK0, tmp + 1);
> smp_mb(); /* B and C */
> tmp = READ_ONCE(*UNLOCK0);
> WRITE_ONCE(*UNLOCK0, tmp + 1);
> } else {
> tmp = READ_ONCE(*LOCK1);
> WRITE_ONCE(*LOCK1, tmp + 1);
> smp_mb(); /* B and C */
> tmp = READ_ONCE(*UNLOCK1);
> WRITE_ONCE(*UNLOCK1, tmp + 1);
> }
>
> // third reader
> idx = READ_ONCE(*IDX);
> if (idx == 0) {
> tmp = READ_ONCE(*LOCK0);
> WRITE_ONCE(*LOCK0, tmp + 1);
> smp_mb(); /* B and C */
> tmp = READ_ONCE(*UNLOCK0);
> WRITE_ONCE(*UNLOCK0, tmp + 1);
> } else {
> tmp = READ_ONCE(*LOCK1);
> WRITE_ONCE(*LOCK1, tmp + 1);
> smp_mb(); /* B and C */
> tmp = READ_ONCE(*UNLOCK1);
> WRITE_ONCE(*UNLOCK1, tmp + 1);
> }
> }
>
> exists (0:unlock0=0 /\ 1:idx=0)
>