Message-ID: <20180717175232.ea7pi2bqswnzmznc@pburton-laptop>
Date: Tue, 17 Jul 2018 10:52:32 -0700
From: Paul Burton <paul.burton@...s.com>
To: Huacai Chen <chenhc@...ote.com>
Cc: Ralf Baechle <ralf@...ux-mips.org>,
James Hogan <jhogan@...nel.org>, linux-mips@...ux-mips.org,
Fuxin Zhang <zhangfx@...ote.com>,
Zhangjin Wu <wuzhangjin@...il.com>,
Huacai Chen <chenhuacai@...il.com>, stable@...r.kernel.org,
Alan Stern <stern@...land.harvard.edu>,
Andrea Parri <andrea.parri@...rulasolutions.com>,
Will Deacon <will.deacon@....com>,
Peter Zijlstra <peterz@...radead.org>,
Boqun Feng <boqun.feng@...il.com>,
Nicholas Piggin <npiggin@...il.com>,
David Howells <dhowells@...hat.com>,
Jade Alglave <j.alglave@....ac.uk>,
Luc Maranget <luc.maranget@...ia.fr>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Akira Yokosawa <akiyks@...il.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] MIPS: Change definition of cpu_relax() for Loongson-3
Hi Huacai,
On Fri, Jul 13, 2018 at 03:37:57PM +0800, Huacai Chen wrote:
> Linux expects that if a CPU modifies a memory location, then that
> modification will eventually become visible to other CPUs in the system.
>
> On Loongson-3 processors with an SFB (Store Fill Buffer), loads may be
> prioritised over stores so it is possible for a store operation to be
> postponed if a polling loop immediately follows it. If the variable
> being polled indirectly depends on the outstanding store [for example,
> another CPU may be polling the variable that is pending modification]
> then there is the potential for deadlock if interrupts are disabled.
> This deadlock occurs in qspinlock code.
>
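For anyone following along, a rough sketch of the scenario being
described (x & y are purely illustrative here, not the actual qspinlock
fields):

	/* CPU0, interrupts disabled: */
	WRITE_ONCE(x, 1);		/* store may sit in the SFB */
	while (!READ_ONCE(y))		/* polling loads keep winning */
		cpu_relax();

	/* CPU1, interrupts disabled: */
	while (!READ_ONCE(x))		/* never observes CPU0's store */
		cpu_relax();
	WRITE_ONCE(y, 1);		/* so this store never happens */

If CPU0's store can be deferred indefinitely while its polling loads
keep being serviced, neither CPU makes progress; making cpu_relax() a
full barrier flushes the SFB so the store becomes visible.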
> This patch changes the definition of cpu_relax() to smp_mb() for
> Loongson-3, forcing a flush of the SFB on SMP systems before the
> next load takes place. If the kernel is not compiled for SMP support,
> this will expand to a barrier() as before.
>
> References: 534be1d5a2da940 (ARM: 6194/1: change definition of cpu_relax() for ARM11MPCore)
> Cc: stable@...r.kernel.org
> Signed-off-by: Huacai Chen <chenhc@...ote.com>
> ---
> arch/mips/include/asm/processor.h | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h
> index af34afb..a8c4a3a 100644
> --- a/arch/mips/include/asm/processor.h
> +++ b/arch/mips/include/asm/processor.h
> @@ -386,7 +386,17 @@ unsigned long get_wchan(struct task_struct *p);
> #define KSTK_ESP(tsk) (task_pt_regs(tsk)->regs[29])
> #define KSTK_STATUS(tsk) (task_pt_regs(tsk)->cp0_status)
>
> +#ifdef CONFIG_CPU_LOONGSON3
> +/*
> + * Loongson-3's SFB (Store-Fill-Buffer) may get starved when stuck in a read
> + * loop. Since spin loops of any kind should have a cpu_relax() in them, force
> + * a Store-Fill-Buffer flush from cpu_relax() such that any pending writes will
> + * become available as expected.
> + */
I think "may starve writes" or "may queue writes indefinitely" would be
clearer than "may get starved".
> +#define cpu_relax() smp_mb()
> +#else
> #define cpu_relax() barrier()
> +#endif
>
> /*
> * Return_address is a replacement for __builtin_return_address(count)
> --
> 2.7.0
Apart from the comment above, though, this looks better to me.
Re-copying the LKMM maintainers - are you happy(ish) with this?
Thanks,
Paul