Message-ID: <alpine.DEB.2.20.1612020915530.4295@nanos>
Date: Fri, 2 Dec 2016 09:18:37 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Peter Zijlstra <peterz@...radead.org>
cc: LKML <linux-kernel@...r.kernel.org>,
David Daney <ddaney@...iumnetworks.com>,
Ingo Molnar <mingo@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Sebastian Siewior <bigeasy@...utronix.de>,
Will Deacon <will.deacon@....com>,
Mark Rutland <mark.rutland@....com>, stable@...r.kernel.org
Subject: Re: [patch 1/4] rtmutex: Prevent dequeue vs. unlock race
On Thu, 1 Dec 2016, Peter Zijlstra wrote:
> On Wed, Nov 30, 2016 at 09:04:41PM -0000, Thomas Gleixner wrote:
> > It's remarkable that the test program provided by David triggers on ARM64
> > and MIPS64 really quickly, but it refuses to reproduce on x86-64, even
> > though the problem exists there as well. That refusal might explain why
> > this was not discovered earlier, despite the bug existing from day one of
> > the rtmutex implementation more than 10 years ago.
>
> > - clear_rt_mutex_waiters(lock);
>
> So that compiles into:
>
> andq $0xfffffffffffffffe,0x48(%rbx)
>
> Which is a RmW memop. Now per the architecture documents we can decompose
> that into a normal load-store and the race exists. But I would not be
> surprised if that starts with the cacheline in exclusive mode (because
> it knows it will do the store). Which makes it a very tiny race indeed.
If it really takes the cacheline exclusive right away, then there is no
race, because the cmpxchg has to wait for the release and will see the
store. If the cmpxchg comes first, the RmW will see the new value.
Fun stuff, isn't it?
tglx