Message-ID: <20151203163725.GJ11337@arm.com>
Date: Thu, 3 Dec 2015 16:37:26 +0000
From: Will Deacon <will.deacon@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...nel.org, oleg@...hat.com, linux-kernel@...r.kernel.org,
paulmck@...ux.vnet.ibm.com, boqun.feng@...il.com, corbet@....net,
mhocko@...nel.org, dhowells@...hat.com,
torvalds@...ux-foundation.org, waiman.long@....com, pjt@...gle.com
Subject: Re: [PATCH 3/4] locking: Introduce smp_cond_acquire()

Hi Peter,

On Thu, Dec 03, 2015 at 01:40:13PM +0100, Peter Zijlstra wrote:
> Introduce smp_cond_acquire() which combines a control dependency and a
> read barrier to form acquire semantics.
>
> This primitive has two benefits:
> - it documents control dependencies,
> - it's typically cheaper than using smp_load_acquire() in a loop.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> include/linux/compiler.h | 17 +++++++++++++++++
> kernel/locking/qspinlock.c | 3 +--
> kernel/sched/core.c | 8 +-------
> kernel/sched/sched.h | 2 +-
> 4 files changed, 20 insertions(+), 10 deletions(-)
>
> --- a/include/linux/compiler.h
> +++ b/include/linux/compiler.h
> @@ -299,6 +299,23 @@ static __always_inline void __write_once
> __u.__val; \
> })
>
> +/**
> + * smp_cond_acquire() - Spin wait for cond with ACQUIRE ordering
> + * @cond: boolean expression to wait for
> + *
> + * Equivalent to using smp_load_acquire() on the condition variable but employs
> + * the control dependency of the wait to reduce the barrier on many platforms.
> + *
> + * The control dependency provides a LOAD->STORE order, the additional RMB
> + * provides LOAD->LOAD order, together they provide LOAD->{LOAD,STORE} order,
> + * aka. ACQUIRE.
> + */
> +#define smp_cond_acquire(cond) do { \
> + while (!(cond)) \
> + cpu_relax(); \
> + smp_rmb(); /* ctrl + rmb := acquire */ \
> +} while (0)
> +
> #endif /* __KERNEL__ */
>
> #endif /* __ASSEMBLY__ */
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -446,8 +446,7 @@ void queued_spin_lock_slowpath(struct qs
> if ((val = pv_wait_head_or_lock(lock, node)))
> goto locked;
>
> - while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_PENDING_MASK)
> - cpu_relax();
> +	smp_cond_acquire(!((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK));

I think we spoke about this before, but what would work really well for
arm64 here is if we could override smp_cond_acquire in such a way that
the atomic_read could be performed explicitly in the macro. That would
allow us to use an LDXR to set the exclusive monitor, which in turn
means we can issue a WFE and get a cheap wakeup when lock->val is
actually modified.
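Something along these lines (a completely untested sketch; the
smp_cond_load_acquire name and the details are illustrative, not an
existing interface) would expose the load, with VAL naming the value
just read so the condition can test it:

#ifndef smp_cond_load_acquire
#define smp_cond_load_acquire(ptr, cond_expr) ({		\
	typeof(*(ptr)) VAL;					\
	for (;;) {						\
		VAL = READ_ONCE(*(ptr));			\
		if (cond_expr)					\
			break;					\
		cpu_relax();					\
	}							\
	smp_rmb();	/* ctrl + rmb := acquire */		\
	VAL;							\
})
#endif

The qspinlock wait above would then read:

	val = smp_cond_load_acquire(&lock->val.counter,
				    !(VAL & _Q_LOCKED_PENDING_MASK));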
With the current scheme, there's not enough information expressed in the
"cond" parameter to perform this optimisation.
Cheers,
Will