Message-ID: <Pine.LNX.4.44L0.1904181324420.1303-100000@iolanthe.rowland.org>
Date:   Thu, 18 Apr 2019 13:44:36 -0400 (EDT)
From:   Alan Stern <stern@...land.harvard.edu>
To:     Andrea Parri <andrea.parri@...rulasolutions.com>
cc:     "Paul E. McKenney" <paulmck@...ux.ibm.com>,
        LKMM Maintainers -- Akira Yokosawa <akiyks@...il.com>,
        Boqun Feng <boqun.feng@...il.com>,
        Daniel Lustig <dlustig@...dia.com>,
        David Howells <dhowells@...hat.com>,
        Jade Alglave <j.alglave@....ac.uk>,
        Luc Maranget <luc.maranget@...ia.fr>,
        Nicholas Piggin <npiggin@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Will Deacon <will.deacon@....com>,
        Daniel Kroening <kroening@...ox.ac.uk>,
        Kernel development list <linux-kernel@...r.kernel.org>
Subject: Re: Adding plain accesses and detecting data races in the LKMM

On Thu, 18 Apr 2019, Andrea Parri wrote:

> > Another question is "should the kernel permit smp_mb__{before,after}*()
> > anywhere other than immediately before or after the primitive being
> > strengthened?"
> 
> Mmh, I do think that keeping these barriers "immediately before or after
> the primitive being strengthened" is a good practice (readability, and
> all that), if this is what you're suggesting.
> 
> However, a first audit of the callsites showed that this practice is
> in fact not always applied; notably... ;-)
> 
> 	kernel/rcu/tree_exp.h:sync_exp_work_done
> 	kernel/sched/cpupri.c:cpupri_set
> 
> So there appear to be, at least, some exceptions to it, or reasons for
> not always following it?  Thoughts?
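> 
> For illustration, the first of these (roughly, from my reading of
> kernel/rcu/tree_exp.h; the trace call is elided): the barrier sits at
> the end of the function, not adjacent to any atomic operation, and per
> its comment orders the gp-seq test against what the caller does next:
> 
> 	static bool sync_exp_work_done(unsigned long s)
> 	{
> 		if (rcu_exp_gp_seq_done(s)) {
> 			/* Ensure test happens before caller kfree(). */
> 			smp_mb__before_atomic(); /* ^^^ */
> 			return true;
> 		}
> 		return false;
> 	}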
> 
> BTW, while auditing these callsites, I've stumbled across the following
> snippet (from kernel/futex.c):
> 
> 	*futex = newval;
> 	sys_futex(WAKE, futex);
> 	  futex_wake(futex);
> 	  smp_mb(); (B)
> 	  if (waiters)
> 	    ...
> 
> where B is actually (cf. futex_get_mm()):
> 
> 	atomic_inc(...->mm_count);
> 	smp_mb__after_atomic();
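> 
> (For reference, futex_get_mm() is roughly as follows, with its comment
> paraphrased:)
> 
> 	static inline void futex_get_mm(union futex_key *key)
> 	{
> 		mmgrab(key->private.mm);	/* atomic_inc(&mm->mm_count) */
> 		/* Relied upon by the waker side; see kernel/futex.c. */
> 		smp_mb__after_atomic();
> 	}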
> 
> It seems worth mentioning that, AFAICT, this sequence does not
> necessarily provide ordering when plain accesses are involved; consider,
> e.g., the following variant of the snippet:
> 
> 	A:*x = 1;
> 	/*
> 	 * I've "ignored" the syscall, which should provide
> 	 * (at least) a compiler barrier...
> 	 */
> 	atomic_inc(u);
> 	smp_mb__after_atomic();
> 	B:r0 = *y;
> 
> On x86, AFAICT, the compiler can do this:
> 
> 	atomic_inc(u);
> 	A:*x = 1;
> 	smp_mb__after_atomic();
> 	B:r0 = *y;
> 
> (the implementation of atomic_inc() contains no compiler barrier), and
> then the CPU can "reorder" A and B (smp_mb__after_atomic() being
> #defined to a mere compiler barrier, which constrains the compiler but
> not the CPU).
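> 
> For concreteness, the two x86 definitions in play are roughly as
> follows (from my reading of arch/x86/include/asm/atomic.h and
> arch/x86/include/asm/barrier.h):
> 
> 	static __always_inline void arch_atomic_inc(atomic_t *v)
> 	{
> 		asm volatile(LOCK_PREFIX "incl %0"	/* no "memory" clobber */
> 			     : "+m" (v->counter));
> 	}
> 
> 	/* Atomic operations are already serializing on x86 */
> 	#define __smp_mb__after_atomic()	barrier()
> 
> so the LOCK'ed instruction orders the CPU, but nothing here stops the
> compiler from moving unrelated plain accesses across the asm().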

Are you saying that on x86, atomic_inc() acts as a full memory barrier 
but not as a compiler barrier, and vice versa for 
smp_mb__after_atomic()?  Or that neither atomic_inc() nor 
smp_mb__after_atomic() implements a full memory barrier?

Either one seems like a very dangerous situation indeed.

Alan

> The mips implementation also seems affected by such "reorderings": I am
> not familiar with this implementation, but AFAICT it does not enforce
> ordering from A to B in the following snippet:
> 
> 	A:*x = 1;
> 	atomic_inc(u);
> 	smp_mb__after_atomic();
> 	B:WRITE_ONCE(*y, 1);
> 
> when CONFIG_WEAK_ORDERING=y, CONFIG_WEAK_REORDERING_BEYOND_LLSC=n.
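> 
> (Roughly, from my reading of arch/mips/include/asm/barrier.h -- please
> take the exact spellings with a grain of salt: with that config,
> smp_mb__after_atomic() expands to a compiler barrier that emits no
> SYNC instruction,
> 
> 	#define __WEAK_LLSC_MB		"		\n"
> 	#define smp_llsc_mb() \
> 		__asm__ __volatile__(__WEAK_LLSC_MB : : : "memory")
> 	#define __smp_mb__after_atomic()	smp_llsc_mb()
> 
> while the ll/sc loop implementing atomic_inc() has no "memory" clobber
> of its own; so the compiler can first move A past the atomic, after
> which no ll/sc sequence separates A from B to order them.)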
> 
> Do these observations make sense to you?  Thoughts?
> 
>   Andrea
