Message-ID: <CA+55aFz6LGt35C2ex6qCM8cQDowmr9HR-+hR9bMXJPE707+k-A@mail.gmail.com>
Date: Tue, 4 Feb 2014 08:29:36 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Tony Luck <tony.luck@...el.com>, Fenghua Yu <fenghua.yu@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Will Deacon <will.deacon@....com>
Subject: Re: [RFC][PATCH] ia64: Fix atomic ops vs memory barriers
On Tue, Feb 4, 2014 at 4:22 AM, Peter Zijlstra <peterz@...radead.org> wrote:
>
> The below patch assumes the SDM is right (TM), and fixes the atomic_t,
> cmpxchg() and xchg() implementations by inserting a mf before the
> cmpxchg.acq (or xchg).
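Concretely, that fix amounts to something like the sketch below,
assuming the kernel's usual ia64 cmpxchg4.acq pattern (the function
name and exact asm are illustrative, not Peter's actual patch):

	/* Sketch only: 4-byte cmpxchg with a full fence ("mf")
	 * forced in front of the acquire form, per the patch
	 * description above. */
	static inline int
	fenced_cmpxchg4_acq(volatile int *ptr, int old, int new)
	{
		int prev;

		/* ar.ccv holds the expected ("old") value */
		asm volatile ("mov ar.ccv=%0;;" :: "rO" (old));
		asm volatile ("mf;;\n"		/* full memory fence */
			      "cmpxchg4.acq %0=[%1],%2,ar.ccv"
			      : "=r" (prev)
			      : "r" (ptr), "r" (new)
			      : "memory");
		return prev;
	}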
You picked the wrong thing to assume is right. The SDM is wrong.
Last time this came up, Tony explained it thus:
>> Worse still - early processor implementations actually just ignored
>> the acquire/release and did a full fence all the time. Unfortunately
>> this meant a lot of badly written code that used .acq when they really
>> wanted .rel became legacy out in the wild - so when we made a cpu
>> that strictly did the .acq or .rel ... all that code started breaking - so
>> we had to back-pedal and keep the "legacy" behavior of a full fence :-(
and since ia64 is basically on life support as an architecture, we can
pretty much agree that the SDM is dead, and the only thing that
matters is the actual implementation.
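For anyone who hasn't seen the failure mode Tony describes: the legacy
bug is an unlock built on an acquire-only primitive. A hypothetical
C11 rendering (not code from any actual driver):

	#include <stdatomic.h>

	atomic_int lock;
	int data;

	void broken_unlock(void)
	{
		data = 42;	/* store inside the critical section */
		/*
		 * BUG: acquire ordering only constrains accesses that
		 * come after it; it does not keep the store to 'data'
		 * from being reordered past the unlock.  A CPU that
		 * treats .acq as a full fence hides the bug; one that
		 * honours acquire strictly exposes it.  What was
		 * wanted here is memory_order_release.
		 */
		atomic_exchange_explicit(&lock, 0, memory_order_acquire);
	}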
The above quote was strictly in the context of cmpxchg, though, so
it's possible that the "fetchadd" instruction acts differently. I
would personally expect it to have the same issues, but let's see what
Tony says... Tony?
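For reference, fetchadd takes only a small set of immediate
increments, and its .acq form is the one in question here. A
bare-bones sketch (function name illustrative):

	/* Sketch: atomic fetch-and-add of 1 via fetchadd4.acq.
	 * The increment is an immediate limited to +/-1, 4, 8, 16;
	 * anything else needs a cmpxchg loop instead. */
	static inline int
	fetchadd4_acq_one(volatile int *ptr)
	{
		int old;

		asm volatile ("fetchadd4.acq %0=[%1],1"
			      : "=r" (old)
			      : "r" (ptr)
			      : "memory");
		return old;
	}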
Linus