Message-ID: <20180131141140.GA9450@andrea>
Date: Wed, 31 Jan 2018 15:11:40 +0100
From: Andrea Parri <parri.andrea@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Will Deacon <will.deacon@....com>, linux-kernel@...r.kernel.org,
	Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH] locking/qspinlock: Ensure node is initialised before
	updating prev->next
On Wed, Jan 31, 2018 at 01:38:59PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 31, 2018 at 12:20:46PM +0000, Will Deacon wrote:
> > diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> > index 294294c71ba4..1ebbc366a31d 100644
> > --- a/kernel/locking/qspinlock.c
> > +++ b/kernel/locking/qspinlock.c
> > @@ -408,16 +408,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> > */
> > if (old & _Q_TAIL_MASK) {
> > prev = decode_tail(old);
> > +
> > /*
> > - * The above xchg_tail() is also a load of @lock which generates,
> > - * through decode_tail(), a pointer.
> > - *
> > - * The address dependency matches the RELEASE of xchg_tail()
> > - * such that the access to @prev must happen after.
> > + * We must ensure that the stores to @node are observed before
> > + * the write to prev->next. The address dependency on xchg_tail
> > + * is not sufficient to ensure this because the read component
> > + * of xchg_tail is unordered with respect to the initialisation
> > + * of node.
> > */
> > - smp_read_barrier_depends();
>
> Right, except you're patching old code here, please try again on a tree
> that includes commit:
>
> 548095dea63f ("locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath()")
BTW, which loads was/is the smp_read_barrier_depends() supposed to order? ;)
My guess is that this barrier was/is there to "order" the load from
xchg_tail() with the address-dependent loads in pv_wait_node(); is this
true? (Does Will's patch really remove the reliance on the barrier?)
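For concreteness, here is a (hand-written, much reduced) sketch of the
pattern I have in mind, in litmus form; "node0" stands for the previous
waiter's MCS node, "tail" for the lock's tail word, and the names are
mine, not the kernel's:

C MP+xchgtail+pvwaitnode

(*
 * Reduced sketch, not the actual sources: P0 is the previous waiter,
 * which initialises its node and then publishes the tail (xchg_tail()'s
 * RELEASE is modelled with smp_store_release()); P1 is the next waiter,
 * whose load of the old tail feeds, via decode_tail(), the
 * address-dependent loads in pv_wait_node() (modelled by r1).
 *)

{
	tail=dummy;
	dummy=0;
	node0=0;
}

P0(int *node0, int **tail)
{
	WRITE_ONCE(*node0, 1);
	smp_store_release(tail, node0);
}

P1(int *node0, int **tail)
{
	int *r0;
	int r1;

	r0 = READ_ONCE(*tail);
	r1 = READ_ONCE(*r0);
}

exists (1:r0=node0 /\ 1:r1=0)

AFAICT, the "exists" state is forbidden as long as the address dependency
in P1 is honoured, which is what smp_read_barrier_depends() guaranteed on
Alpha; but I may well be missing the intended pairing.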
Andrea
>
> > -
> > - WRITE_ONCE(prev->next, node);
> > +	smp_store_release(&prev->next, node);
> >
> > pv_wait_node(node, prev);
> > arch_mcs_spin_lock_contended(&node->locked);
> > --
> > 2.1.4
> >
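As to the scenario which the new comment describes (the initialisation
of @node vs. the store to prev->next), a similarly reduced, hand-written
sketch would be the following; "node1" stands for the new waiter's
->locked field, "next" for the previous waiter's ->next field:

C S+nodeinit+prevnext

(*
 * Reduced sketch, not the actual sources: P0 is the new waiter, which
 * initialises its node (->locked = 0) and then links it into the queue;
 * P1 is the previous waiter/lock holder, which follows ->next and hands
 * the lock off by storing 1 to the node's ->locked field (modelled by
 * the plain int node1).
 *)

{
	next=dummy;
	dummy=0;
	node1=0;
}

P0(int *node1, int **next)
{
	WRITE_ONCE(*node1, 0);
	smp_store_release(next, node1);
}

P1(int **next)
{
	int *r0;

	r0 = READ_ONCE(*next);
	WRITE_ONCE(*r0, 1);
}

exists (1:r0=node1 /\ node1=0)

AFAICT this is an instance of the "S" pattern: with the RELEASE in P0,
the final "node1=0" (i.e., the hand-off store of 1 being lost, leaving
the new waiter spinning forever) is forbidden, whereas with a plain
WRITE_ONCE(), as before the patch, the initialising store could still be
propagated after P1's store.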