Message-ID: <20180131123859.GQ2269@hirez.programming.kicks-ass.net>
Date: Wed, 31 Jan 2018 13:38:59 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Will Deacon <will.deacon@....com>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH] locking/qspinlock: Ensure node is initialised before
updating prev->next
On Wed, Jan 31, 2018 at 12:20:46PM +0000, Will Deacon wrote:
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 294294c71ba4..1ebbc366a31d 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -408,16 +408,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> */
> if (old & _Q_TAIL_MASK) {
> prev = decode_tail(old);
> +
> /*
> - * The above xchg_tail() is also a load of @lock which generates,
> - * through decode_tail(), a pointer.
> - *
> - * The address dependency matches the RELEASE of xchg_tail()
> - * such that the access to @prev must happen after.
> + * We must ensure that the stores to @node are observed before
> + * the write to prev->next. The address dependency on xchg_tail
> + * is not sufficient to ensure this because the read component
> + * of xchg_tail is unordered with respect to the initialisation
> + * of node.
> */
> - smp_read_barrier_depends();
Right, except you're patching old code here; please try again against a
tree that includes commit:
548095dea63f ("locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath()")
> -
> - WRITE_ONCE(prev->next, node);
> +	smp_store_release(&prev->next, node);
>
> pv_wait_node(node, prev);
> arch_mcs_spin_lock_contended(&node->locked);
> --
> 2.1.4
>