Message-Id: <20120205220950.472021223@pcw.home.local>
Date: Sun, 05 Feb 2012 23:10:13 +0100
From: Willy Tarreau <w@....eu>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Milton Miller <miltonm@....com>, <linuxppc-dev@...ts.ozlabs.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andi Kleen <andi@...stfloor.org>,
Nick Piggin <npiggin@...nel.dk>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Anton Blanchard <anton@...ba.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Eric Dumazet <eric.dumazet@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Greg KH <gregkh@...uxfoundation.org>
Subject: [PATCH 24/91] seqlock: Don't smp_rmb in seqlock reader spin loop
2.6.27-longterm review patch. If anyone has any objections, please let us know.
------------------
commit 5db1256a5131d3b133946fa02ac9770a784e6eb2 upstream.
Move the smp_rmb to after the cpu_relax loop in read_seqbegin and add
ACCESS_ONCE to make sure the test and return are consistent.
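For context, the loop being touched lives in read_seqbegin(), which
readers invoke through the usual retry pattern sketched below. This is
only an illustrative sketch, not part of the patch; my_lock and
my_value are made-up names, while seqlock_t, SEQLOCK_UNLOCKED,
read_seqbegin() and read_seqretry() are the existing kernel API. The
xtime_lock readers named below follow the same shape.

	/* Illustrative seqlock read side; hypothetical lock and data. */
	static seqlock_t my_lock = SEQLOCK_UNLOCKED;
	static u64 my_value;

	static u64 read_my_value(void)
	{
		unsigned seq;
		u64 val;

		do {
			/* read_seqbegin() spins while a writer holds the
			 * lock (odd sequence); this is the loop modified
			 * by the hunk below. */
			seq = read_seqbegin(&my_lock);
			val = my_value;	/* speculative read of protected data */
		} while (read_seqretry(&my_lock, seq)); /* retry if a writer raced */

		return val;
	}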
A multi-threaded core in the lab didn't like the update
from 2.6.35 to 2.6.36, to the point that it would hang during
boot when multiple threads were active. Bisection showed
af5ab277ded04bd9bc6b048c5a2f0e7d70ef0867 (clockevents:
Remove the per cpu tick skew) as the culprit, and this is
supported by stack traces showing waits on xtime_lock in
tick_do_update_jiffies64 and/or update_vsyscall.
Experimentation showed that the combination of cpu_relax and smp_rmb
was significantly slowing the progress of other threads sharing
the core, and this patch is effective in avoiding the hang.
One theory is that the rmb affects the whole core while the
cpu_relax causes a resource rebalance flush; together they
create an interference cadence that is unbroken when the seqlock
reader has interrupts disabled.
At first I was confused why the refactor in
3c22cd5709e8143444a6d08682a87f4c57902df3 (kernel: optimise
seqlock) didn't affect how this patch applies, but after some
study I found that it affected seqcount, not seqlock. The new
seqcount was not factored back into the seqlock; I defer that
to the future.
While the removal of the timer interrupt offset created
contention for the xtime lock while a CPU does the
additional work to update the system clock, the seqlock
implementation with the tight rmb spin loop goes back much
further, and was just waiting for the right trigger.
Signed-off-by: Milton Miller <miltonm@....com>
Cc: <linuxppc-dev@...ts.ozlabs.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Andi Kleen <andi@...stfloor.org>
Cc: Nick Piggin <npiggin@...nel.dk>
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: Anton Blanchard <anton@...ba.org>
Cc: Paul McKenney <paulmck@...ux.vnet.ibm.com>
Acked-by: Eric Dumazet <eric.dumazet@...il.com>
Link: http://lkml.kernel.org/r/%3Cseqlock-rmb%40mdm.bga.com%3E
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@...e.de>
---
include/linux/seqlock.h | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
Index: longterm-2.6.27/include/linux/seqlock.h
===================================================================
--- longterm-2.6.27.orig/include/linux/seqlock.h 2012-02-05 22:34:34.295914918 +0100
+++ longterm-2.6.27/include/linux/seqlock.h 2012-02-05 22:34:38.288914818 +0100
@@ -88,12 +88,12 @@
unsigned ret;
repeat:
- ret = sl->sequence;
- smp_rmb();
+ ret = ACCESS_ONCE(sl->sequence);
if (unlikely(ret & 1)) {
cpu_relax();
goto repeat;
}
+ smp_rmb();
return ret;
}
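For reference, a reconstruction of how read_seqbegin() reads once this
hunk is applied (pieced together from the context lines above, so
treat it as a sketch rather than the authoritative source):

	static __always_inline unsigned read_seqbegin(const seqlock_t *sl)
	{
		unsigned ret;

	repeat:
		/* Snapshot the sequence once; ACCESS_ONCE keeps the test
		 * and the returned value consistent. */
		ret = ACCESS_ONCE(sl->sequence);
		if (unlikely(ret & 1)) {
			/* Writer in progress: relax without issuing an rmb. */
			cpu_relax();
			goto repeat;
		}
		/* A single rmb after the loop orders the sequence read
		 * before the reads of the protected data. */
		smp_rmb();

		return ret;
	}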
--