Message-ID: <20170420150826.n7r3omoy5hxbmtjw@hirez.programming.kicks-ass.net>
Date:   Thu, 20 Apr 2017 17:08:26 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:     linux-kernel@...r.kernel.org, mingo@...nel.org,
        jiangshanlai@...il.com, dipankar@...ibm.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
        dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
        oleg@...hat.com, bobby.prani@...il.com, dvyukov@...gle.com,
        will.deacon@....com
Subject: Re: [PATCH tip/core/rcu 07/13] rcu: Add smp_mb__after_atomic() to
 sync_exp_work_done()

On Thu, Apr 20, 2017 at 08:03:21AM -0700, Paul E. McKenney wrote:
> On Thu, Apr 20, 2017 at 01:17:43PM +0200, Peter Zijlstra wrote:

> > > +/**
> > > + * spin_is_locked - Conditionally interpose after prior critical sections
> > > + * @lock: the spinlock whose critical sections are to be interposed.
> > > + *
> > > + * Semantically this is equivalent to a spin_trylock(), and, if
> > > + * the spin_trylock() succeeds, immediately followed by a (mythical)
> > > + * spin_unlock_relaxed().  The return value from spin_trylock() is returned
> > > + * by spin_is_locked().  Note that all current architectures have extremely
> > > + * efficient implementations in which the spin_is_locked() does not even
> > > + * write to the lock variable.
> > > + *
> > > + * A successful spin_is_locked() primitive in some sense "takes its place"
> > > + * after some critical section for the lock in question.  Any accesses
> > > + * following a successful spin_is_locked() call will therefore happen
> > > + * after any accesses by any of the preceding critical sections for that
> > > + * same lock.  Note, however, that spin_is_locked() provides absolutely no
> > > + * ordering guarantees for code preceding the call to that spin_is_locked().
> > > + */
> > >  static __always_inline int spin_is_locked(spinlock_t *lock)
> > >  {
> > >  	return raw_spin_is_locked(&lock->rlock);
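
(For illustration only -- a minimal sketch of the guarantee the proposed
comment is claiming, using a made-up lock and data item; this is not code
from the patch:)

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(mylock);		/* hypothetical */
static int shared_data;			/* hypothetical */

/* CPU 0: an ordinary critical section. */
void writer(void)
{
	spin_lock(&mylock);
	WRITE_ONCE(shared_data, 1);
	spin_unlock(&mylock);
}

/* CPU 1: the proposed comment claims that when spin_is_locked()
 * "interposes" after one of mylock's critical sections, the accesses
 * below are ordered after that critical section's accesses.  Code
 * before the call gets no ordering from it at all.
 */
int checker(void)
{
	if (spin_is_locked(&mylock))
		return READ_ONCE(shared_data);
	return -1;
}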
> > 
> > I'm currently confused on this one. The case listed in the qspinlock code
> > doesn't appear to exist in the kernel anymore (or at least, I'm having
> > trouble finding it).
> > 
> > That said, I'm also not sure spin_is_locked() provides an acquire, as
> > that comment has an explicit smp_acquire__after_ctrl_dep();
> 
> OK, I have dropped this portion of the patch for the moment.
> 
> Going forward, exactly what semantics do you believe spin_is_locked()
> provides?
> 
> Do any of the current implementations need to change to provide the
> semantics expected by the various use cases?

I don't have anything other than the comment I wrote back then. I would
have to go audit all spin_is_locked() implementations and users (again).
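
To make the audit criterion concrete, here is a sketch (not the actual
qspinlock comment, and using the same hypothetical names as the sketch
above) of the pattern in question: the branch on the lock word gives only
a control dependency, and whether any spin_is_locked() implementation
guarantees more than that is exactly what is in question.  A caller that
needs ACQUIRE ordering for its subsequent accesses would add the barrier
explicitly:

#include <linux/compiler.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(mylock);		/* hypothetical, as above */
static int shared_data;			/* hypothetical, as above */

int check_and_read(void)
{
	if (spin_is_locked(&mylock)) {
		/*
		 * The branch gives only a control dependency on the
		 * lock-word load; the explicit barrier upgrades that
		 * to ACQUIRE ordering for the load below.
		 */
		smp_acquire__after_ctrl_dep();
		return READ_ONCE(shared_data);
	}
	return -1;
}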
