Date:   Mon, 12 Jun 2017 16:51:43 +0200
From:   Dmitry Vyukov <dvyukov@...gle.com>
To:     Paul McKenney <paulmck@...ux.vnet.ibm.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>,
        Lai Jiangshan <jiangshanlai@...il.com>, dipankar@...ibm.com,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Josh Triplett <josh@...htriplett.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Steven Rostedt <rostedt@...dmis.org>,
        David Howells <dhowells@...hat.com>,
        Eric Dumazet <edumazet@...gle.com>, fweisbec@...il.com,
        Oleg Nesterov <oleg@...hat.com>, bobby.prani@...il.com,
        Will Deacon <will.deacon@....com>,
        Andrea Parri <parri.andrea@...il.com>, hiralpat@...co.com,
        satishkh@...co.com, sebaddel@...co.com, kartilak@...co.com
Subject: Re: [PATCH tip/core/rcu 07/13] rcu: Add smp_mb__after_atomic() to sync_exp_work_done()

On Sat, Jun 10, 2017 at 12:56 AM, Paul E. McKenney
<paulmck@...ux.vnet.ibm.com> wrote:
>> > > > +/**
>> > > > + * spin_is_locked - Conditionally interpose after prior critical sections
>> > > > + * @lock: the spinlock whose critical sections are to be interposed.
>> > > > + *
>> > > > + * Semantically this is equivalent to a spin_trylock(), and, if
>> > > > + * the spin_trylock() succeeds, immediately followed by a (mythical)
>> > > > + * spin_unlock_relaxed().  The return value from spin_trylock() is returned
>> > > > + * by spin_is_locked().  Note that all current architectures have extremely
>> > > > + * efficient implementations in which the spin_is_locked() does not even
>> > > > + * write to the lock variable.
>> > > > + *
>> > > > + * A successful spin_is_locked() primitive in some sense "takes its place"
>> > > > + * after some critical section for the lock in question.  Any accesses
>> > > > + * following a successful spin_is_locked() call will therefore happen
>> > > > + * after any accesses by any of the preceding critical section for that
>> > > > + * same lock.  Note, however, that spin_is_locked() provides absolutely no
>> > > > + * ordering guarantees for code preceding the call to that spin_is_locked().
>> > > > + */
>> > > >  static __always_inline int spin_is_locked(spinlock_t *lock)
>> > > >  {
>> > > >         return raw_spin_is_locked(&lock->rlock);
>> > >
>> > > I'm currently confused on this one. The case listed in the qspinlock code
>> > > doesn't appear to exist in the kernel anymore (or at least, I'm having
>> > > trouble finding it).
>> > >
>> > > That said, I'm also not sure spin_is_locked() provides an acquire, as
>> > > the code next to that comment uses an explicit smp_acquire__after_ctrl_dep();
>> >
>> > OK, I have dropped this portion of the patch for the moment.
>> >
>> > Going forward, exactly what semantics do you believe spin_is_locked()
>> > provides?
>> >
>> > Do any of the current implementations need to change to provide the
>> > semantics expected by the various use cases?
>>
>> I don't have anything other than the comment I wrote back then. I would
>> have to go audit all spin_is_locked() implementations and users (again).
>
> And Andrea (CCed) and I did a review of the v4.11 uses of
> spin_is_locked(), and none of the current uses requires any particular
> ordering.
>
> There is one very strange use of spin_is_locked() in __fnic_set_state_flags()
> in drivers/scsi/fnic/fnic_scsi.c.  This code checks spin_is_locked(),
> and then acquires the lock only if it wasn't held.  I am having a very
> hard time imagining a situation where this would do something useful.
> My guess is that the author thought that spin_is_locked() meant that
> the current CPU holds the lock, when it instead means that some CPU
> (possibly the current one, possibly not) holds the lock.
>
> Adding the FNIC guys on CC so that they can enlighten me.
>
> Ignoring the FNIC use case for the moment, anyone believe that
> spin_is_locked() needs to provide any ordering guarantees?
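
Regarding the fnic case: as described, that code boils down to
something like the following sketch (paraphrased, with made-up struct
and function names for illustration; not the actual fnic code):

	void set_state_flags(struct dev_state *st, unsigned long bits)
	{
		int was_locked = spin_is_locked(&st->lock);

		if (!was_locked)
			spin_lock(&st->lock);

		/*
		 * If some *other* CPU held the lock, was_locked is
		 * true, we never acquired the lock ourselves, and this
		 * update races with that CPU's critical section.
		 */
		st->flags |= bits;

		if (!was_locked)
			spin_unlock(&st->lock);
	}

That is only plausible if spin_is_locked() meant "the current CPU
holds the lock"; since it actually means "some CPU holds the lock",
the usual fix is a separate *_locked() helper for callers that already
hold the lock.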


Not providing any ordering guarantees for spin_is_locked() sounds good to me.
Restricting all types of mutexes/locks to the simple canonical use
case (protecting a critical section of code) makes it easier to reason
about code, enables a bunch of possible static/dynamic correctness
checks, and relieves the lock/unlock functions from providing
unnecessary ordering (i.e. an acquire in spin_is_locked() pairing with
a release in spin_lock()).
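
To make that pairing concrete, here is a made-up sketch (not taken
from any current user) of code that would only be correct if
spin_is_locked() provided acquire ordering:

	/* CPU 0 */
	spin_lock(&l);
	WRITE_ONCE(shared, 1);
	spin_unlock(&l);

	/* CPU 1 */
	while (spin_is_locked(&l))
		cpu_relax();
	/*
	 * CPU 1 would like to observe shared == 1 once it sees the
	 * lock released, but a control dependency orders only later
	 * stores, not later loads, so without an acquire in
	 * spin_is_locked() the read below can still see 0.
	 */
	r = READ_ONCE(shared);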
Tricky uses of is_locked and try_lock can instead resort to explicit
atomic operations (or maybe be removed entirely).
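
A tricky user like the sketch above can express its ordering
explicitly, e.g. with a separate flag and smp_store_release() /
smp_load_acquire(), without peeking at the lock word at all (again
just a sketch):

	/* CPU 0 */
	spin_lock(&l);
	WRITE_ONCE(shared, 1);
	smp_store_release(&done, 1);		/* publishes the write above */
	spin_unlock(&l);

	/* CPU 1 */
	while (!smp_load_acquire(&done))	/* pairs with the release */
		cpu_relax();
	r = READ_ONCE(shared);			/* guaranteed to see 1 */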
