Message-ID: <20190514154636.GF2677@hirez.programming.kicks-ass.net>
Date: Tue, 14 May 2019 17:46:36 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: huangpei@...ngson.cn
Cc: Paul Burton <paul.burton@...s.com>,
"stern@...land.harvard.edu" <stern@...land.harvard.edu>,
"akiyks@...il.com" <akiyks@...il.com>,
"andrea.parri@...rulasolutions.com"
<andrea.parri@...rulasolutions.com>,
"boqun.feng@...il.com" <boqun.feng@...il.com>,
"dlustig@...dia.com" <dlustig@...dia.com>,
"dhowells@...hat.com" <dhowells@...hat.com>,
"j.alglave@....ac.uk" <j.alglave@....ac.uk>,
"luc.maranget@...ia.fr" <luc.maranget@...ia.fr>,
"npiggin@...il.com" <npiggin@...il.com>,
"paulmck@...ux.ibm.com" <paulmck@...ux.ibm.com>,
"will.deacon@....com" <will.deacon@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>,
Huacai Chen <chenhc@...ote.com>
Subject: Re: Re: Re: Re: Re: [RFC][PATCH 2/5] mips/atomic: Fix
loongson_llsc_mb() wreckage
(sorry for the delay, I got sidetracked elsewhere)
On Fri, Apr 26, 2019 at 10:57:20AM +0800, huangpei@...ngson.cn wrote:
> > -----Original Message-----
> > On Thu, Apr 25, 2019 at 08:51:17PM +0800, huangpei@...ngson.cn wrote:
> >
> > > > So basically the initial value of @v is set to 1.
> > > >
> > > > Then CPU-1 does atomic_add_unless(v, 1, 0)
> > > > CPU-2 does atomic_set(v, 0)
> > > >
> > > > If CPU1 goes first, it will see 1, which is not 0 and thus add 1 to 1
> > > > and obtains 2. Then CPU2 goes and writes 0, so the exist clause sees
> > > > v==0 and doesn't observe 2.
> > > >
> > > > The other way around, CPU-2 goes first, writes a 0, then CPU-1 goes and
> > > > observes the 0, finds it matches 0 and doesn't add. Again, the exist
> > > > clause will find 0 doesn't match 2.
> > > >
> > > > This all goes unstuck if interleaved like:
> > > >
> > > >
> > > >      CPU-1                     CPU-2
> > > >
> > > >                                xor   t0, t0
> > > > 1:   ll    t0, v
> > > >      bez   t0, 2f
> > > >                                sw    t0, v
> > > >      add   t0, t1
> > > >      sc    t0, v
> > > >      beqz  t0, 1b
> > > >
> > > > (sorry if I got the MIPS asm wrong; it's not something I normally write)
> > > >
> > > > And the store-word from CPU-2 doesn't make the SC from CPU-1 fail.
> > > >
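(For concreteness, here is a rough userspace rendering of that scenario; it is not the
kernel code and not a herd7 litmus file. The add_unless() helper below is an invented
stand-in for atomic_add_unless(v, 1, 0), with a C11 CAS loop playing the role of the
ll/sc. On a correct implementation the final value is always 0, and v == 2 is exactly
the forbidden outcome described above.)

#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int v = 1;			/* initial value, as above */

/* add @a to @v unless @v == @u; the CAS stands in for the sc */
static int add_unless(atomic_int *p, int a, int u)
{
	int old = atomic_load(p);

	do {
		if (old == u)
			return 0;		/* matched @u, don't add */
	} while (!atomic_compare_exchange_weak(p, &old, old + a));

	return 1;
}

static void *cpu1(void *arg) { add_unless(&v, 1, 0); return NULL; }
static void *cpu2(void *arg) { atomic_store(&v, 0);  return NULL; }

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_create(&t2, NULL, cpu2, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	/* whichever thread wins the race, a correct implementation ends
	 * with v == 0; v == 2 is the forbidden outcome */
	printf("v = %d\n", atomic_load(&v));
	assert(atomic_load(&v) == 0);
	return 0;
}
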
> > >
> > > loongson's llsc bug DOES NOT fail this litmus test (we will not get V=2);
> > >
> > > only a speculative memory access from CPU-1 can "blind" CPU-1 (here "blind" means
> > > it gets ll/sc wrong); that speculative access is still observed correctly by CPU-2.
> > > In this case the sw from CPU-2 invalidates CPU-1's copy of the line (to I), which
> > > CPU-1 observes, clearing the llbit and then failing the sc.
> >
> > I'm not following, suppose CPU-1 happens as a speculation (imagine
> > whatever code is required to make that happen before). CPU-2 sw will
> > cause I on CPU-1's ll but, as in the previous email, CPU-1 will continue
> > as if it still has E and complete the SC.
> >
> > That is; I'm just not seeing why this case would be different from two
> > competing LL/SCs.
> >
>
> I get your point. I kept my eye on the sw from CPU-2, but forgot the speculative
> mem access from CPU-1.
>
> There is no difference between this one and the former case.
>
> =========================================================================
> V = 1
>
> CPU-1                                CPU-2
>
>                                      xor   t0, t0
> 1: ll    t0, V
>    beqz  t0, 2f
>
>    /* if a speculative memory access
>     * kicks the cache line of V out,
>     * it can blind CPU-1: CPU-1 still
>     * believes it holds E on V and
>     * can NOT see that the sw from
>     * CPU-2 actually invalidated V,
>     * which should have cleared
>     * CPU-1's LLBit, but did not */
>                                      sw    t0, V   // just after the sw, V = 0
>    addiu t0, t0, 1
>
>    sc    t0, V
>    /* oops, the sc writes t0 (== 2)
>     * into V with LLBit still set */
>
>    /* we get V = 2 */
>    beqz  t0, 1b
>    nop
> 2:
> ================================================================================
>
> if the speculative mem access *does not* kick out the cache line of V, CPU-1 can see
> the sw from CPU-2 and clear its LLBit, which causes the sc to fail and retry; that's OK
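(To make the LLBit reasoning above concrete, here is a toy, single-threaded C model;
it is purely illustrative, not how the hardware or the kernel implements anything, and
cpu1_ll()/cpu1_sc()/cpu2_sw() are invented names. A remote store to the linked line is
supposed to clear the link bit and make the sc fail; the "blinded" case is that store's
invalidation being missed, which is what lets the sc succeed and produce V = 2.)

#include <stdbool.h>
#include <stdio.h>

static int   mem_V = 1;		/* the variable V from the diagram */
static bool  llbit;		/* CPU-1's link bit */
static int  *lladdr;		/* CPU-1's linked address */

static int cpu1_ll(int *p)           { llbit = true; lladdr = p; return *p; }
static bool cpu1_sc(int *p, int val)
{
	if (!llbit || lladdr != p)
		return false;		/* link lost: sc fails, no store */
	*p = val;
	llbit = false;
	return true;
}

/* a store from another CPU to the linked line must clear the link bit;
 * "blinded" models the bug where that invalidation is not observed */
static void cpu2_sw(int *p, int val, bool blinded)
{
	*p = val;
	if (!blinded && p == lladdr)
		llbit = false;
}

int main(void)
{
	/* correct behaviour: the remote sw clears LLBit, the sc fails */
	int t0 = cpu1_ll(&mem_V);		/* t0 = 1 */
	cpu2_sw(&mem_V, 0, false);
	printf("correct: sc %s, V=%d\n",
	       cpu1_sc(&mem_V, t0 + 1) ? "succeeds" : "fails", mem_V);

	/* buggy behaviour: the invalidation is missed ("blinded") */
	mem_V = 1;
	t0 = cpu1_ll(&mem_V);
	cpu2_sw(&mem_V, 0, true);		/* LLBit wrongly survives */
	printf("buggy:   sc %s, V=%d\n",
	       cpu1_sc(&mem_V, t0 + 1) ? "succeeds" : "fails", mem_V);
	return 0;
}
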
OK; so do I understand it correctly that your CPU _can_ in fact fail
that test and result in 2? If so I think I'm (finally) understanding :-)
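(For context, the loongson_llsc_mb() from the Subject line lives in
arch/mips/include/asm/barrier.h; the snippet below is a sketch of its effect, not the
verbatim kernel source, and the usage comment underneath is only an illustration of the
workaround idea: issue a sync before the ll so the CPU cannot enter the ll/sc section
already blinded by speculation. Exactly where that barrier must sit relative to the
loop is what the patch under discussion fixes.)

/* sketch: a full "sync" on affected Loongson-3 parts, a no-op elsewhere */
#ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS
#define loongson_llsc_mb()	__asm__ __volatile__("sync" : : : "memory")
#else
#define loongson_llsc_mb()	do { } while (0)
#endif

/*
 * Assumed shape of a guarded loop (illustration only, not the real
 * atomic_add_unless() asm):
 *
 *	loongson_llsc_mb();
 *	1:	ll	t0, v
 *		...
 *		sc	t0, v
 *		beqz	t0, 1b
 */
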