Message-ID: <20170610150221.GA7128@andrea>
Date:   Sat, 10 Jun 2017 17:02:21 +0200
From:   Andrea Parri <parri.andrea@...il.com>
To:     "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
        peterz@...radead.org
Cc:     mingo@...nel.org, jiangshanlai@...il.com, dipankar@...ibm.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
        dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
        oleg@...hat.com, bobby.prani@...il.com, stern@...land.harvard.edu,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH tip/core/rcu 20/88] atomics: Add header comment to
 spin_unlock_wait()

On Thu, May 25, 2017 at 02:58:53PM -0700, Paul E. McKenney wrote:
> There is material describing the ordering guarantees provided by
> spin_unlock_wait(), but it is not necessarily easy to find.  This commit
> therefore adds a docbook header comment to this function informally
> describing its semantics.
> 
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> Acked-by: Peter Zijlstra <peterz@...radead.org>
> ---
>  include/linux/spinlock.h | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
> 
> diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> index 59248dcc6ef3..d9510e8522d4 100644
> --- a/include/linux/spinlock.h
> +++ b/include/linux/spinlock.h
> @@ -369,6 +369,26 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
>  	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
>  })
>  
> +/**
> + * spin_unlock_wait - Interpose between successive critical sections
> + * @lock: the spinlock whose critical sections are to be interposed.
> + *
> + * Semantically this is equivalent to a spin_lock() immediately
> + * followed by a spin_unlock().  However, most architectures have
> + * more efficient implementations in which spin_unlock_wait()
> + * cannot block concurrent lock acquisition, and in some cases
> + * does not write to the lock variable at all.
> + * Nevertheless, spin_unlock_wait() can have high overhead, so if
> + * you feel the need to use it, please check to see if there is
> + * a better way to get your job done.
> + *
> + * The ordering guarantees provided by spin_unlock_wait() are:
> + *
> + * 1.  All accesses preceding the spin_unlock_wait() happen before
> + *     any accesses in later critical sections for this same lock.
> + * 2.  All accesses following the spin_unlock_wait() happen after
> + *     any accesses in earlier critical sections for this same lock.
> + */
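
To make the two guarantees above concrete: the sort of usage they are
meant to support looks like the sketch below (every identifier in it
is made up for illustration; this is not code from the patch):

  struct obj {
  	spinlock_t fast_lock;
  	bool global_mode;
  };

  /* Slow path: switch @obj to global mode, excluding the fast paths. */
  void enter_global_mode(struct obj *obj)
  {
  	WRITE_ONCE(obj->global_mode, true);
  	/*
  	 * Guarantee 1: any fast path that acquires fast_lock after
  	 * this point observes global_mode == true and backs off.
  	 * Guarantee 2: any fast-path critical section that got in
  	 * first has fully completed, and its effects are visible,
  	 * by the time we return.
  	 */
  	spin_unlock_wait(&obj->fast_lock);
  }

  /* Fast path, possibly running on another CPU. */
  void fast_op(struct obj *obj)
  {
  	spin_lock(&obj->fast_lock);
  	if (READ_ONCE(obj->global_mode)) {
  		spin_unlock(&obj->fast_lock);
  		return;		/* defer to the slow path */
  	}
  	/* ... fast-path work, excluded from the global operation ... */
  	spin_unlock(&obj->fast_lock);
  }

This mirrors, e.g., the historical ipc/sem.c style of use, and it is
exactly the pattern that breaks if an implementation provides weaker
ordering than the "spin_lock(); spin_unlock()" description promises.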

[From a discussion with Paul, Alan]

I understand that some implementations would need to be "strengthened" to
meet these "spin_lock(); spin_unlock()" semantics; please compare with

  726328d92a42b6d4b76078e2659f43067f82c4e8
  ("locking/spinlock, arch: Update and fix spin_unlock_wait() implementations")

Should we "relax" this description?  Should we integrate it with changes
to the implementation(s)? [...]  What do you think?

  Andrea


>  static __always_inline void spin_unlock_wait(spinlock_t *lock)
>  {
>  	raw_spin_unlock_wait(&lock->rlock);
>  }
> -- 
> 2.5.2
> 
