Message-ID: <alpine.DEB.2.21.1809271644120.8118@nanos.tec.linutronix.de>
Date: Thu, 27 Sep 2018 16:47:47 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Kurt Kanzenbach <kurt.kanzenbach@...utronix.de>
cc: Will Deacon <will.deacon@....com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
linux-kernel@...r.kernel.org,
Daniel Wagner <daniel.wagner@...mens.com>,
Peter Zijlstra <peterz@...radead.org>, x86@...nel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
"H. Peter Anvin" <hpa@...or.com>,
Boqun Feng <boqun.feng@...il.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Mark Rutland <mark.rutland@....com>
Subject: Re: [Problem] Cache line starvation
On Thu, 27 Sep 2018, Kurt Kanzenbach wrote:
> On Thu, Sep 27, 2018 at 04:25:47PM +0200, Kurt Kanzenbach wrote:
> > However, the issue still triggers fine. With stress-ng we're able to
> > generate latencies in the millisecond range. The only workaround we've
> > found so far is to add a "delay" in cpu_relax().
>
> It might be interesting for you how we added the delay. We've used:
>
> static inline void cpu_relax(void)
> {
> 	volatile int i = 0;
>
> 	asm volatile("yield" ::: "memory");
> 	while (i++ <= 1000);
> }
>
> Of course it's not efficient, but it works.
I wonder if it's just the store to the stack which makes it work. I've seen
the same when instrumenting this on x86: as long as the careful
instrumentation stayed purely in registers, the problem still triggered.
Once the instrumentation grew and stores to the stack got involved, the
problem vanished.
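
Untested sketch to check that theory: keep the yield, drop the delay
loop and leave only the stack store (variable name kept from Kurt's
variant):

static inline void cpu_relax(void)
{
	/* volatile forces the compiler to actually emit the stores */
	volatile int i = 0;

	asm volatile("yield" ::: "memory");
	i = 1;	/* single store to the stack, no spinning */
}

If the problem still triggers with that, then it's really the delay
which matters and not the store.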
Thanks,
tglx