Message-ID: <20181105224654.GA25864@brain-police>
Date: Mon, 5 Nov 2018 22:49:21 +0000
From: Will Deacon <will.deacon@....com>
To: Gao Xiang <gaoxiang25@...wei.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Philippe Ombredanne <pombredanne@...b.com>,
Kate Stewart <kstewart@...uxfoundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, Miao Xie <miaoxie@...wei.com>,
Chao Yu <chao@...nel.org>, peterz@...radead.org
Subject: Re: [PATCH v2] bit_spinlock: introduce smp_cond_load_relaxed

[+PeterZ -- please include him on stuff like this]

Hi Gao,

On Tue, Oct 30, 2018 at 02:04:41PM +0800, Gao Xiang wrote:
> It is better to use wrapped smp_cond_load_relaxed
> instead of open-coded busy waiting for bit_spinlock.
>
> Signed-off-by: Gao Xiang <gaoxiang25@...wei.com>
> ---
>
> change log v2:
> - fix the incorrect expression !(VAL >> (bitnum & (BITS_PER_LONG-1)))
> - the test result is described in the following reply.

Please include the results in the commit message, so that this change is
justified.
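
For context, smp_cond_load_relaxed(ptr, cond_expr) spins until cond_expr
(evaluated against VAL, the value loaded from ptr) becomes true. On
architectures without a native implementation it falls back to a plain
READ_ONCE()/cpu_relax() polling loop; a rough sketch of the generic
fallback from include/asm-generic/barrier.h (arm64, for example,
overrides this with a wait-for-event based version):

	#ifndef smp_cond_load_relaxed
	#define smp_cond_load_relaxed(ptr, cond_expr) ({	\
		typeof(ptr) __PTR = (ptr);			\
		typeof(*ptr) VAL;				\
		for (;;) {					\
			VAL = READ_ONCE(*__PTR);		\
			if (cond_expr)				\
				break;				\
			cpu_relax();				\
		}						\
		VAL;						\
	})
	#endif
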
> diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
> index bbc4730a6505..d5f922b5ffd9 100644
> --- a/include/linux/bit_spinlock.h
> +++ b/include/linux/bit_spinlock.h
> @@ -15,22 +15,19 @@
>   */
>  static inline void bit_spin_lock(int bitnum, unsigned long *addr)
>  {
> -	/*
> -	 * Assuming the lock is uncontended, this never enters
> -	 * the body of the outer loop. If it is contended, then
> -	 * within the inner loop a non-atomic test is used to
> -	 * busywait with less bus contention for a good time to
> -	 * attempt to acquire the lock bit.
> -	 */
> -	preempt_disable();
>  #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
> -	while (unlikely(test_and_set_bit_lock(bitnum, addr))) {
> -		preempt_enable();
> -		do {
> -			cpu_relax();
> -		} while (test_bit(bitnum, addr));
> +	const unsigned int bitshift = bitnum & (BITS_PER_LONG - 1);
> +
> +	while (1) {
> +		smp_cond_load_relaxed(&addr[BIT_WORD(bitnum)],
> +				      !((VAL >> bitshift) & 1));
>  		preempt_disable();
> +		if (!test_and_set_bit_lock(bitnum, addr))
> +			break;
> +		preempt_enable();
>  	}
> +#else
> +	preempt_disable();

This appears to introduce a bunch of overhead for the uncontended
fastpath: the lock word is now loaded and tested via
smp_cond_load_relaxed() before the first test_and_set_bit_lock() attempt.
How about the much-simpler-but-completely-untested (tm) patch below?

Will

--->8

diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h
index 3ae021368f48..9de8d3544630 100644
--- a/include/asm-generic/bitops/lock.h
+++ b/include/asm-generic/bitops/lock.h
@@ -6,6 +6,15 @@
 #include <linux/compiler.h>
 #include <asm/barrier.h>
 
+static inline void spin_until_bit_unlock(unsigned int nr,
+					 volatile unsigned long *p)
+{
+	unsigned long mask = BIT_MASK(nr);
+
+	p += BIT_WORD(nr);
+	smp_cond_load_relaxed(p, !(VAL & mask));
+}
+
 /**
  * test_and_set_bit_lock - Set a bit and return its old value, for lock
  * @nr: Bit to set
diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
index bbc4730a6505..d711c62e718c 100644
--- a/include/linux/bit_spinlock.h
+++ b/include/linux/bit_spinlock.h
@@ -26,9 +26,7 @@ static inline void bit_spin_lock(int bitnum, unsigned long *addr)
 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
 	while (unlikely(test_and_set_bit_lock(bitnum, addr))) {
 		preempt_enable();
-		do {
-			cpu_relax();
-		} while (test_bit(bitnum, addr));
+		spin_until_bit_unlock(bitnum, addr);
 		preempt_disable();
 	}
 #endif
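
For anyone unfamiliar with the interface, a minimal (hypothetical) usage
sketch, with bit 0 of 'word' acting as the lock:

	static unsigned long word;

	static void example(void)
	{
		bit_spin_lock(0, &word);	/* wait for bit 0 to clear, then set it */
		/* critical section: preemption disabled, bit 0 held */
		bit_spin_unlock(0, &word);	/* clear bit 0 with release semantics */
	}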