Message-ID: <20181002063159.r4hxljpzyxpsdg5s@helium.monom.org>
Date: Tue, 2 Oct 2018 08:31:59 +0200
From: Daniel Wagner <wagi@...om.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-kernel@...r.kernel.org,
Daniel Wagner <daniel.wagner@...mens.com>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will.deacon@....com>, x86@...nel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
"H. Peter Anvin" <hpa@...or.com>,
Boqun Feng <boqun.feng@...il.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [Problem] Cache line starvation

On Fri, Sep 21, 2018 at 02:02:26PM +0200, Sebastian Andrzej Siewior wrote:
> This matches Daniel Wagner's observations which he described in [0] on
> v4.4-RT.

Peter Z recommended dropping back to ticket spinlocks instead of trying
to port all the qspinlock changes back to v4.4-rt.

With ticket spinlocks, 'stress-ng --ptrace 4' ran for 50 hours without a
problem (before, it failed within seconds), and my normal workload for
-rt testing ran for 60 hours without a problem (before, it broke within
24h).

The cyclictest max values went down slightly, from 32us to 30us, but
that might just be coincidence.

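In case it is useful for the thread: the FIFO handover, which is
presumably the property that makes ticket locks less prone to this kind
of cache line starvation, is easy to see in a minimal sketch with plain
C11 atomics. This is only an illustration, not the kernel's
arch-specific implementation; the names are made up for the example.

/*
 * Minimal ticket-lock sketch using C11 atomics. Illustration only,
 * not the kernel's arch-specific code; names are made up.
 */
#include <stdatomic.h>

struct ticket_lock {
	atomic_uint next;	/* next ticket to hand out */
	atomic_uint owner;	/* ticket currently allowed to hold the lock */
};

#define TICKET_LOCK_INIT { 0, 0 }

static void ticket_lock(struct ticket_lock *lock)
{
	/* atomically take the next ticket in line */
	unsigned int ticket = atomic_fetch_add_explicit(&lock->next, 1,
							memory_order_relaxed);

	/* spin until the owner count reaches our ticket */
	while (atomic_load_explicit(&lock->owner,
				    memory_order_acquire) != ticket)
		;	/* real code would insert a cpu_relax()-style pause */
}

static void ticket_unlock(struct ticket_lock *lock)
{
	/* hand the lock to the next waiter in arrival order */
	unsigned int next_owner =
		atomic_load_explicit(&lock->owner, memory_order_relaxed) + 1;

	atomic_store_explicit(&lock->owner, next_owner,
			      memory_order_release);
}

Each waiter spins on its own ticket value and unlock bumps the owner
count by exactly one, so waiters are served strictly in arrival order
and no CPU can keep re-winning the cache line the way it can with an
unfair test-and-set style lock.
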
Thanks,
Daniel