Message-ID: <516794a4-7a3c-4a82-6e45-de650a02a1de@gmail.com>
Date:   Sat, 20 Apr 2019 23:50:14 +0900
From:   Akira Yokosawa <akiyks@...il.com>
To:     "Paul E. McKenney" <paulmck@...ux.ibm.com>
Cc:     Andrea Parri <andrea.parri@...rulasolutions.com>,
        Alan Stern <stern@...land.harvard.edu>,
        Boqun Feng <boqun.feng@...il.com>,
        Daniel Lustig <dlustig@...dia.com>,
        David Howells <dhowells@...hat.com>,
        Jade Alglave <j.alglave@....ac.uk>,
        Luc Maranget <luc.maranget@...ia.fr>,
        Nicholas Piggin <npiggin@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Will Deacon <will.deacon@....com>,
        Daniel Kroening <kroening@...ox.ac.uk>,
        Kernel development list <linux-kernel@...r.kernel.org>,
        Akira Yokosawa <akiyks@...il.com>
Subject: Re: Adding plain accesses and detecting data races in the LKMM

On Fri, 19 Apr 2019 11:06:41 -0700, Paul E. McKenney wrote:
> On Sat, Apr 20, 2019 at 12:06:58AM +0900, Akira Yokosawa wrote:
>> Hi Paul,
>>
[...]
> 
>>> +     (1) The compiler can reorder the load from a to precede the
>>> +     atomic_dec(), (2) Because x86 smp_mb__before_atomic() is only a
>>> +     compiler barrier, the CPU can reorder the preceding store to
>>> +     obj->dead with the later load from a.
>>> +
>>> +     This could be avoided by using READ_ONCE(), which would prevent the
>>> +     compiler from reordering due to both atomic_dec() and READ_ONCE()
>>> +     being volatile accesses, and is usually preferable for loads from
>>> +     shared variables.  However, weakly ordered CPUs would still be
>>> +     free to reorder the atomic_dec() with the load from a, so a more
>>> +     readable option is to also use smp_mb__after_atomic() as follows:
>>
>> The point here is not just "readability", but also the portability of the
>> code, isn't it?
> 
> As Andrea noted, in this particular case, the guarantee that the
> store to obj->dead precedes the load from x is portable.  Either the
> smp_mb__before_atomic() or the atomic_dec() must provide the ordering.

I think I understand this.  What I wanted to say was that the code implied
by the subjunctive sentence (the one discussing x86), which was not spelled
out, would be:

	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);
	r1 = READ_ONCE(x);

and that this code is not portable if we expect the atomic_dec() to be
ordered with the READ_ONCE().
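
To make that concrete, here is an SB-shaped litmus test along the lines I
have in mind (an untested sketch; the test name and the variable names are
mine, with *dead standing in for obj->dead and rc for obj->ref_count).  The
exists clause probes only the non-portable part, that is, whether the
atomic_dec() can be reordered with the READ_ONCE():

C SB+mb-before-atomic-dec+mb

(*
 * Untested sketch, names are mine: can P0's atomic_dec() be reordered
 * with its READ_ONCE(*x) when only smp_mb__before_atomic() is used?
 *)

{}

P0(int *dead, int *x, atomic_t *rc)
{
	int r0;

	WRITE_ONCE(*dead, 1);
	smp_mb__before_atomic();
	atomic_dec(rc);
	r0 = READ_ONCE(*x);
}

P1(int *x, atomic_t *rc)
{
	int r1;

	WRITE_ONCE(*x, 1);
	smp_mb();
	r1 = atomic_read(rc);
}

exists (0:r0=0 /\ 1:r1=0)

If I am reading the proposed model correctly, this outcome is allowed as
written, and adding smp_mb__after_atomic() after the atomic_dec() on P0
should flip the verdict to "forbidden"; I have not run it through herd7,
though.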

> However, you are right that there is some non-portability.  But this
> non-portability involves the order of the atomic_dec() and the store to x.

Yes, you've guessed it right.

> 
> So what I did was ...
> 
>>         Thanks, Akira
>>
>>> +
>>> +	WRITE_ONCE(obj->dead, 1);
>>> +	smp_mb__before_atomic();
>>> +	atomic_dec(&obj->ref_count);
>>> +	smp_mb__after_atomic();
>>> +	r1 = READ_ONCE(a);
>>> +
>>> +     This orders all three accesses against each other, and also makes
>>> +     the intent quite clear.
> 
> ... change the above paragraph to read as follows:
> 
>      In addition, the example without the smp_mb__after_atomic() does
>      not necessarily order the atomic_dec() with the load from x.
>      In contrast, the example with both smp_mb__before_atomic() and
>      smp_mb__after_atomic() orders all three accesses against each other,
>      and also makes the intent quite clear.
> 
> Does that help?

This looks a little bit redundant to me. The original one is clear
enough.

How about editing the leading sentence above:

>>> +     shared variables.  However, weakly ordered CPUs would still be
>>> +     free to reorder the atomic_dec() with the load from a, so a more
>>> +     readable option is to also use smp_mb__after_atomic() as follows:

to read as follows?

     shared variables.  However, weakly ordered CPUs would still be
     free to reorder the atomic_dec() with the load from x, so a
     portable and more readable option is to also use
     smp_mb__after_atomic() as follows:

Obviously, the interesting discussion going on in another thread will
affect this patch.

        Thanks, Akira

> 
> 							Thanx, Paul
> 
>>>       See Documentation/atomic_{t,bitops}.txt for more information.
>>>  
>>> diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
>>> index 8dcb37835b61..b6866f93abb8 100644
>>> --- a/tools/memory-model/linux-kernel.cat
>>> +++ b/tools/memory-model/linux-kernel.cat
>>> @@ -28,8 +28,8 @@ include "lock.cat"
>>>  let rmb = [R \ Noreturn] ; fencerel(Rmb) ; [R \ Noreturn]
>>>  let wmb = [W] ; fencerel(Wmb) ; [W]
>>>  let mb = ([M] ; fencerel(Mb) ; [M]) |
>>> -	([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
>>> -	([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
>>> +	([M] ; fencerel(Before-atomic) ; [RMW]) |
>>> +	([RMW] ; fencerel(After-atomic) ; [M]) |
>>>  	([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
>>>  	([M] ; po ; [UL] ; (co | po) ; [LKW] ;
>>>  		fencerel(After-unlock-lock) ; [M])
>>>
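
Just to make sure I am reading this hunk right: in an MP-shaped test such
as the untested sketch below (the test name and variable names are mine),
the old definition related P0's WRITE_ONCE(*x) to its WRITE_ONCE(*y)
through the trailing "po? ; [M]", whereas the new definition relates the
WRITE_ONCE(*x) only to the atomic_dec() itself.  Whether the exists clause
below remains forbidden is then up to the rest of the model, which herd7
can confirm:

C MP+mb-before-atomic-dec+rmb

(*
 * Untested sketch, names are mine: is P0's WRITE_ONCE(*x) still ordered
 * before its WRITE_ONCE(*y) once the "po? ; [M]" suffix is dropped from
 * the Before-atomic term of mb?
 *)

{}

P0(int *x, int *y, atomic_t *rc)
{
	WRITE_ONCE(*x, 1);
	smp_mb__before_atomic();
	atomic_dec(rc);
	WRITE_ONCE(*y, 1);
}

P1(int *x, int *y)
{
	int r0;
	int r1;

	r0 = READ_ONCE(*y);
	smp_rmb();
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)
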
>>
> 
