Message-ID: <20130102001038.GC13678@google.com>
Date: Tue, 1 Jan 2013 16:10:38 -0800
From: Michel Lespinasse <walken@...gle.com>
To: Rik van Riel <riel@...hat.com>
Cc: linux-kernel@...r.kernel.org, aquini@...hat.com,
lwoodman@...hat.com, jeremy@...p.org,
Jan Beulich <JBeulich@...ell.com>,
Thomas Gleixner <tglx@...utronix.de>,
Eric Dumazet <edumazet@...gle.com>,
Tom Herbert <therbert@...gle.com>
Subject: [PATCH 2/2] x86,smp: proportional backoff for ticket spinlocks

Simple fixed-value proportional backoff for ticket spinlocks.
By pounding on the spinlock's cacheline less often, bus traffic is
reduced. In the case of a data structure with an embedded spinlock,
the lock holder also gets a better chance of making progress.
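
To illustrate the idea outside the kernel, here is a minimal
user-space sketch (the lock type, BACKOFF_FACTOR and cpu_relax_hint()
are invented for this example and are not part of the patch):

	#include <stdatomic.h>

	struct ticket_lock {
		atomic_uint next;	/* next ticket to hand out */
		atomic_uint head;	/* ticket currently being served */
	};

	#define BACKOFF_FACTOR	1	/* plays the role of spinlock_delay */

	static inline void cpu_relax_hint(void)
	{
		__builtin_ia32_pause();	/* x86 PAUSE hint */
	}

	static void ticket_lock(struct ticket_lock *lock)
	{
		unsigned int ticket = atomic_fetch_add(&lock->next, 1);
		unsigned int head;

		while ((head = atomic_load(&lock->head)) != ticket) {
			/* wait longer when more waiters are queued ahead of us */
			unsigned int delay = (ticket - head - 1) * BACKOFF_FACTOR;

			while (delay--)
				cpu_relax_hint();
		}
	}

	static void ticket_unlock(struct ticket_lock *lock)
	{
		atomic_fetch_add(&lock->head, 1);
	}

With zero waiters ahead the computed delay is zero, so the front-most
waiter spins essentially unthrottled; the kernel version below makes
that case explicit with a tight inner loop.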

Note that when a thread notices it is at the head of the line to
acquire the spinlock, it has already touched the spinlock's cacheline
and now holds it in shared state. At this point, extra reads of the
cacheline are local to the processor and do not generate any extra
coherency traffic, until another thread (probably the spinlock owner)
writes to it. When that write occurs, the writing thread first gets
exclusive access to the cacheline, and the waiting thread then asks
for its shared access back. It is expected that in many cases the
writing thread will release the spinlock before the waiting thread
gets its shared access back. For these reasons, it seems unproductive
for the head waiter to throttle its accesses; however, we do want to
throttle the other waiting threads so that they don't generate extra
coherency traffic until they can acquire the spinlock, or at least
reach the head position among the waiters.
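
When CONFIG_DEBUG_FS is enabled, the backoff factor is exposed as a
u32 at the debugfs root, so it can be inspected and tuned at run time,
e.g. (assuming debugfs is mounted at /sys/kernel/debug; the value 4 is
only an example):

	# cat /sys/kernel/debug/spinlock_delay
	1
	# echo 4 > /sys/kernel/debug/spinlock_delay
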
Signed-off-by: Michel Lespinasse <walken@...gle.com>
---
 arch/x86/include/asm/spinlock.h |    2 ++
 arch/x86/kernel/smp.c           |   33 +++++++++++++++++++++++++++------
 2 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 19e8a36b3b83..b49ae57a62c8 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -34,6 +34,8 @@
 # define UNLOCK_LOCK_PREFIX
 #endif
 
+extern unsigned int __read_mostly spinlock_delay;
+
 extern void ticket_spin_lock_wait(arch_spinlock_t *, struct __raw_tickets);
 
 /*
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 20da35427bd5..eb2c49c6cc08 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -23,6 +23,7 @@
 #include <linux/interrupt.h>
 #include <linux/cpu.h>
 #include <linux/gfp.h>
+#include <linux/debugfs.h>
 
 #include <asm/mtrr.h>
 #include <asm/tlbflush.h>
@@ -111,20 +112,40 @@
 
 static atomic_t stopping_cpu = ATOMIC_INIT(-1);
 static bool smp_no_nmi_ipi = false;
+unsigned int __read_mostly spinlock_delay = 1;
 
 /*
  * Wait on a congested ticket spinlock.
  */
 void ticket_spin_lock_wait(arch_spinlock_t *lock, struct __raw_tickets inc)
 {
-	for (;;) {
-		cpu_relax();
-		inc.head = ACCESS_ONCE(lock->tickets.head);
+	__ticket_t head = inc.head, ticket = inc.tail;
+	__ticket_t waiters_ahead;
+	unsigned delay;
+	do {
+		waiters_ahead = ticket - head - 1;
+		if (!waiters_ahead) {
+			do
+				cpu_relax();
+			while (ACCESS_ONCE(lock->tickets.head) != ticket);
+			return;
+		}
+		delay = waiters_ahead * spinlock_delay;
+		do
+			cpu_relax();
+		while (delay--);
+		head = ACCESS_ONCE(lock->tickets.head);
+	} while (head != ticket);
+}
 
-		if (inc.head == inc.tail)
-			break;
-	}
+#ifdef CONFIG_DEBUG_FS
+static __init int spinlock_delay_init_debug(void)
+{
+	debugfs_create_u32("spinlock_delay", 0644, NULL, &spinlock_delay);
+	return 0;
 }
+late_initcall(spinlock_delay_init_debug);
+#endif
 
 /*
  * this function sends a 'reschedule' IPI to another CPU.
--
1.7.7.3