Message-ID: <20161201182542.GP3045@worktop.programming.kicks-ass.net>
Date: Thu, 1 Dec 2016 19:25:42 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>,
David Daney <ddaney@...iumnetworks.com>,
Ingo Molnar <mingo@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Sebastian Siewior <bigeasy@...utronix.de>,
Will Deacon <will.deacon@....com>,
Mark Rutland <mark.rutland@....com>, stable@...r.kernel.org
Subject: Re: [patch 1/4] rtmutex: Prevent dequeue vs. unlock race
On Wed, Nov 30, 2016 at 09:04:41PM -0000, Thomas Gleixner wrote:
> It's remarkable that the test program provided by David triggers on ARM64
> and MIPS64 really quickly, but it refuses to reproduce on x86-64, while the
> problem exists there as well. That refusal might explain why this was not
> discovered earlier, despite the bug existing since day one of the rtmutex
> implementation more than 10 years ago.
> - clear_rt_mutex_waiters(lock);
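For reference, the helper being removed there is (roughly) this plain,
non-atomic clear of the waiters bit in lock->owner:

	static inline void clear_rt_mutex_waiters(struct rt_mutex *lock)
	{
		/* Plain (non-atomic) RmW on lock->owner */
		lock->owner = (struct task_struct *)
			((unsigned long)lock->owner & ~RT_MUTEX_HAS_WAITERS);
	}
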
So that compiles into:
	andq   $0xfffffffffffffffe,0x48(%rbx)
Which is a RmW memop. Now, per the architecture documents, we can
decompose that into a normal load and store, so the race exists. But I
would not be surprised if the CPU starts out with the cacheline in
exclusive mode (because it knows it will do the store), which makes it
a very tiny race indeed.
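
Spelled out, the decomposition looks something like this (a sketch; the
window is between the load and the store):

	unsigned long owner = (unsigned long)lock->owner;  /* load   */
	owner &= ~RT_MUTEX_HAS_WAITERS;                    /* modify */
	/*
	 * <-- another CPU can enqueue a waiter and set
	 *     RT_MUTEX_HAS_WAITERS here; the store below
	 *     then silently wipes that bit out again.
	 */
	lock->owner = (struct task_struct *)owner;         /* store  */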