Message-ID: <1305192952.3795.11.camel@edumazet-laptop>
Date: Thu, 12 May 2011 11:35:52 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Milton Miller <miltonm@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Nick Piggin <npiggin@...nel.dk>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Anton Blanchard <anton@...ba.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
Ingo Molnar <mingo@...e.hu>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andi Kleen <andi@...stfloor.org>
Subject: Re: [PATCH] seqlock: don't smp_rmb in seqlock reader spin loop

On Thursday, 12 May 2011 at 04:13 -0500, Milton Miller wrote:
> Move the smp_rmb after the cpu_relax loop in read_seqlock and add
> ACCESS_ONCE to make sure the test and return are consistent.
>
> A multi-threaded core in the lab didn't like the update
> from 2.6.35 to 2.6.36, to the point it would hang during
> boot when multiple threads were active. Bisection showed
> af5ab277ded04bd9bc6b048c5a2f0e7d70ef0867 (clockevents:
> Remove the per cpu tick skew) as the culprit and it is
> supported with stack traces showing xtime_lock waits including
> tick_do_update_jiffies64 and/or update_vsyscall.
>
> Experimentation showed the combination of cpu_relax and smp_rmb
> was significantly slowing the progress of other threads sharing
> the core, and this patch is effective in avoiding the hang.
>
> A theory is that the rmb is affecting the whole core while the
> cpu_relax is causing a resource rebalance flush; together they
> cause an interference cadence that is unbroken when the seqlock
> reader has interrupts disabled.
>
> At first I was confused why the refactor in
> 3c22cd5709e8143444a6d08682a87f4c57902df3 (kernel: optimise
> seqlock) didn't affect how this patch applied, but after some
> study I found it affected seqcount, not seqlock. The new
> seqcount was not factored back into seqlock; I defer that to
> the future.
>
> While the removal of the timer interrupt offset created
> contention for the xtime lock as a cpu does the
> additional work to update the system clock, the seqlock
> implementation with the tight rmb spin loop goes back much
> further, and was just waiting for the right trigger.
>
> Cc: <stable@...r.kernel.org>
> Signed-off-by: Milton Miller <miltonm@....com>
> ---
>
> To the readers of [RFC] time: xtime_lock is held too long:
>
> I initially thought x86 would not see this because rmb would
> be a nop, but upon closer inspection X86_PPRO_FENCE will add
> an lfence for rmb.
>
> milton
>
> Index: common/include/linux/seqlock.h
> ===================================================================
> --- common.orig/include/linux/seqlock.h 2011-04-06 03:27:02.000000000 -0500
> +++ common/include/linux/seqlock.h 2011-04-06 03:35:02.000000000 -0500
> @@ -88,12 +88,12 @@ static __always_inline unsigned read_seq
> unsigned ret;
>
> repeat:
> - ret = sl->sequence;
> - smp_rmb();
> + ret = ACCESS_ONCE(sl->sequence);
> if (unlikely(ret & 1)) {
> cpu_relax();
> goto repeat;
> }
> + smp_rmb();
>
> return ret;
> }

I fully agree with your analysis. This is a call to make the change I
suggested earlier [1] (use a seqcount object inside seqlock_t):

typedef struct {
	seqcount_t seq;
	spinlock_t lock;
} seqlock_t;
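
A minimal sketch of what the wrappers could then look like, assuming the
seqcount primitives from 3c22cd5709e8 (read_seqcount_begin,
read_seqcount_retry, write_seqcount_begin/end); the actual patch may
differ:

	/* Read side: delegate to seqcount, which already places the
	 * smp_rmb() after the odd-sequence spin loop. */
	static inline unsigned read_seqbegin(const seqlock_t *sl)
	{
		return read_seqcount_begin(&sl->seq);
	}

	static inline int read_seqretry(const seqlock_t *sl, unsigned start)
	{
		return read_seqcount_retry(&sl->seq, start);
	}

	/* Write side: take the spinlock, then bump the sequence. */
	static inline void write_seqlock(seqlock_t *sl)
	{
		spin_lock(&sl->lock);
		write_seqcount_begin(&sl->seq);
	}

	static inline void write_sequnlock(seqlock_t *sl)
	{
		write_seqcount_end(&sl->seq);
		spin_unlock(&sl->lock);
	}

That keeps the barrier placement in one place (seqcount) instead of
duplicating it in the seqlock reader.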
I'll submit a patch for 2.6.40.
Acked-by: Eric Dumazet <eric.dumazet@...il.com>
Thanks
[1] Ref: https://lkml.org/lkml/2011/5/6/351