Message-Id: <20110727214804.6E36E2403FF@tassilo.jf.intel.com>
Date: Wed, 27 Jul 2011 14:48:04 -0700 (PDT)
From: Andi Kleen <andi@...stfloor.org>
To: miltonm@....com, linuxppc-dev@...ts.ozlabs.org,
torvalds@...ux-foundation.org, andi@...stfloor.org,
npiggin@...nel.dk, benh@...nel.crashing.org, anton@...ba.org,
paulmck@...ux.vnet.ibm.com, eric.dumazet@...il.com,
ak@...ux.intel.com, tglx@...utronix.de, gregkh@...e.de,
linux-kernel@...r.kernel.org, stable@...nel.org,
tim.bird@...sony.com
Subject: [PATCH] [6/99] seqlock: Don't smp_rmb in seqlock reader spin loop
2.6.35-longterm review patch. If anyone has any objections, please let me know.
------------------
From: Milton Miller <miltonm@....com>
commit 5db1256a5131d3b133946fa02ac9770a784e6eb2 upstream.
Move the smp_rmb after cpu_relax loop in read_seqlock and add
ACCESS_ONCE to make sure the test and return are consistent.
A multi-threaded core in the lab didn't like the update
from 2.6.35 to 2.6.36, to the point it would hang during
boot when multiple threads were active. Bisection showed
af5ab277ded04bd9bc6b048c5a2f0e7d70ef0867 (clockevents:
Remove the per cpu tick skew) as the culprit and it is
supported with stack traces showing xtime_lock waits including
tick_do_update_jiffies64 and/or update_vsyscall.
Experimentation showed the combination of cpu_relax and smp_rmb
was significantly slowing the progress of other threads sharing
the core, and this patch is effective in avoiding the hang.
One theory is that the rmb affects the whole core while the
cpu_relax causes a resource rebalance flush; together they
cause an interference cadence that is unbroken when the seqlock
reader has interrupts disabled.
At first I was confused about why the refactor in
3c22cd5709e8143444a6d08682a87f4c57902df3 (kernel: optimise
seqlock) didn't affect this patch application, but after some
study I found it affected seqcount, not seqlock. The new
seqcount was not factored back into seqlock; I defer that to
the future.
While the removal of the timer interrupt offset created
contention for the xtime lock while a cpu does the
additional work to update the system clock, the seqlock
implementation with the tight rmb spin loop goes back much
further, and was just waiting for the right trigger.
Signed-off-by: Milton Miller <miltonm@....com>
Cc: <linuxppc-dev@...ts.ozlabs.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Andi Kleen <andi@...stfloor.org>
Cc: Nick Piggin <npiggin@...nel.dk>
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: Anton Blanchard <anton@...ba.org>
Cc: Paul McKenney <paulmck@...ux.vnet.ibm.com>
Acked-by: Eric Dumazet <eric.dumazet@...il.com>
Signed-off-by: Andi Kleen <ak@...ux.intel.com>
Link: http://lkml.kernel.org/r/%3Cseqlock-rmb%40mdm.bga.com%3E
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@...e.de>
---
include/linux/seqlock.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
Index: linux-2.6.35.y/include/linux/seqlock.h
===================================================================
--- linux-2.6.35.y.orig/include/linux/seqlock.h
+++ linux-2.6.35.y/include/linux/seqlock.h
@@ -88,12 +88,12 @@ static __always_inline unsigned read_seq
 	unsigned ret;
 
 repeat:
-	ret = sl->sequence;
-	smp_rmb();
+	ret = ACCESS_ONCE(sl->sequence);
 	if (unlikely(ret & 1)) {
 		cpu_relax();
 		goto repeat;
 	}
+	smp_rmb();
 
 	return ret;
 }
--