Message-Id: <20170419232352.GC3956@linux.vnet.ibm.com>
Date:   Wed, 19 Apr 2017 16:23:52 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org, mingo@...nel.org,
        jiangshanlai@...il.com, dipankar@...ibm.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
        dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
        oleg@...hat.com, bobby.prani@...il.com, dvyukov@...gle.com,
        will.deacon@....com
Subject: Re: [PATCH tip/core/rcu 07/13] rcu: Add smp_mb__after_atomic() to
 sync_exp_work_done()

On Thu, Apr 13, 2017 at 07:51:36PM +0200, Peter Zijlstra wrote:
> On Thu, Apr 13, 2017 at 10:39:51AM -0700, Paul E. McKenney wrote:
> 
> > Well, if there are no objections, I will fix up the smp_mb__before_atomic()
> > and smp_mb__after_atomic() pieces.
> 
> Feel free.

How about if I add this to the atomic_ops.txt description of these
two primitives?

	Preceding a non-value-returning read-modify-write atomic
	operation with smp_mb__before_atomic() and following it with
	smp_mb__after_atomic() provides the same full ordering that is
	provided by value-returning read-modify-write atomic operations.
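
Purely for illustration (made-up names, not part of the proposed
atomic_ops.txt wording), the usage pattern being described is something
like this:

	#include <linux/atomic.h>

	static atomic_t nwaiters = ATOMIC_INIT(0);	/* hypothetical counter */
	static int ready;				/* hypothetical flag */

	static void publish_and_count(void)		/* hypothetical function */
	{
		ready = 1;			/* plain store ... */
		smp_mb__before_atomic();	/* ... ordered before the atomic RMW */
		atomic_inc(&nwaiters);		/* non-value-returning RMW: unordered on its own */
		smp_mb__after_atomic();		/* later accesses ordered after the RMW */
		/* Net effect: the same full ordering as atomic_inc_return(&nwaiters). */
	}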

> > I suppose that one alternative is the new variant of kerneldoc, though
> > very few of these functions have comment headers, let alone kerneldoc
> > headers.  Which reminds me, the question of spin_unlock_wait() and
> > spin_is_locked() semantics came up a bit ago.  Here is what I believe
> > to be the case.  Does this match others' expectations?
> > 
> > o	spin_unlock_wait() semantics:
> > 
> > 	1.	Any access in any critical section prior to the
> > 		spin_unlock_wait() is visible to all code following
> > 		(in program order) the spin_unlock_wait().
> > 
> > 	2.	Any access prior (in program order) to the
> > 		spin_unlock_wait() is visible to any critical
> > 		section following the spin_unlock_wait().
> > 
> > o	spin_is_locked() semantics: Half of spin_unlock_wait(),
> > 	but only if it returns false:
> > 
> > 	1.	Any access in any critical section prior to the
> > 		spin_is_locked() is visible to all code following
> > 		(in program order) the spin_is_locked().
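
As an informal sketch of the pattern those two spin_unlock_wait()
guarantees are meant to support (all names below are made up, this is
not from any real driver), consider:

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(dev_lock);	/* hypothetical lock */
	static int dev_going_away;		/* hypothetical teardown flag */

	static void dev_op(void)		/* hypothetical normal path */
	{
		spin_lock(&dev_lock);
		if (READ_ONCE(dev_going_away)) {
			/*
			 * Guarantee 2: the flag store in dev_teardown() is
			 * visible to critical sections following the wait.
			 */
			spin_unlock(&dev_lock);
			return;
		}
		/* ... access state protected by dev_lock ... */
		spin_unlock(&dev_lock);
	}

	static void dev_teardown(void)		/* hypothetical teardown path */
	{
		WRITE_ONCE(dev_going_away, 1);
		spin_unlock_wait(&dev_lock);	/* wait out the current holder, if any */
		/*
		 * Guarantee 1: accesses by any critical section that preceded
		 * the wait are visible here; per guarantee 2, any dev_op()
		 * critical section that follows the wait sees the flag and
		 * bails out.  It is therefore safe to tear down the state
		 * protected by dev_lock.
		 */
	}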
> 
> Urgh... yes, those are a pain. The best advice is to not use them.
> 
>   055ce0fd1b86 ("locking/qspinlock: Add comments")

Ah, I must confess that I missed that one.  Would you be OK with the
following patch, which adds a docbook header comment for both of them?

							Thanx, Paul

------------------------------------------------------------------------

commit 5789953adc360b4d3685dc89513655e6bfb83980
Author: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Date:   Wed Apr 19 16:20:07 2017 -0700

    atomics: Add header comments for spin_unlock_wait() and spin_is_locked()
    
    There is material describing the ordering guarantees provided by
    spin_unlock_wait() and spin_is_locked(), but it is not necessarily
    easy to find.  This commit therefore adds a docbook header comment
    to both functions informally describing their semantics.
    
    Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 59248dcc6ef3..2647dc7f3ea9 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -369,11 +369,49 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
 	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
 })
 
+/**
+ * spin_unlock_wait - Interpose between successive critical sections
+ * @lock: the spinlock whose critical sections are to be interposed.
+ *
+ * Semantically this is equivalent to a spin_lock() immediately
+ * followed by a spin_unlock().  However, most architectures have
+ * more efficient implementations in which spin_unlock_wait()
+ * cannot block concurrent lock acquisition and, in some cases,
+ * does not even write to the lock variable.
+ * Nevertheless, spin_unlock_wait() can have high overhead, so if
+ * you feel the need to use it, please check to see if there is
+ * a better way to get your job done.
+ *
+ * The ordering guarantees provided by spin_unlock_wait() are:
+ *
+ * 1.  All accesses preceding the spin_unlock_wait() happen before
+ *     any accesses in later critical sections for this same lock.
+ * 2.  All accesses following the spin_unlock_wait() happen after
+ *     any accesses in earlier critical sections for this same lock.
+ */
 static __always_inline void spin_unlock_wait(spinlock_t *lock)
 {
 	raw_spin_unlock_wait(&lock->rlock);
 }
 
+/**
+ * spin_is_locked - Conditionally interpose after prior critical sections
+ * @lock: the spinlock whose critical sections are to be interposed.
+ *
+ * Semantically this is equivalent to a spin_trylock() that, if it
+ * succeeds, is immediately followed by a (mythical) spin_unlock_relaxed().
+ * spin_is_locked() returns the inverse of that spin_trylock()'s return
+ * value, that is, true if the lock was observed to be held.  Note that
+ * all current architectures have extremely efficient implementations in
+ * which spin_is_locked() does not even write to the lock variable.
+ *
+ * A spin_is_locked() call that returns false (that is, one whose mythical
+ * spin_trylock() succeeded) in some sense "takes its place" after some
+ * prior critical section for the lock in question.  Any accesses following
+ * such a call will therefore happen after any accesses in the preceding
+ * critical sections for that same lock.  Note, however, that spin_is_locked()
+ * provides absolutely no ordering guarantees for code preceding the call.
+ */
 static __always_inline int spin_is_locked(spinlock_t *lock)
 {
 	return raw_spin_is_locked(&lock->rlock);
 }

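As an informal sketch of what the spin_is_locked() comment above does
and does not promise (made-up names again), consider:

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(dev_lock);	/* hypothetical lock */
	static int dev_data;			/* hypothetical state written under dev_lock */

	static void cpu0(void)			/* an ordinary critical section */
	{
		spin_lock(&dev_lock);
		WRITE_ONCE(dev_data, 42);
		spin_unlock(&dev_lock);
	}

	static void cpu1(void)			/* conditionally interpose after cpu0() */
	{
		if (!spin_is_locked(&dev_lock)) {
			/*
			 * If the "unlocked" state observed here is the one left
			 * by cpu0()'s spin_unlock(), this code is ordered after
			 * that critical section, so the read below returns 42.
			 * Nothing orders accesses *preceding* this
			 * spin_is_locked() call, and nothing prevents a new
			 * critical section from starting right after it.
			 */
			int snap = READ_ONCE(dev_data);

			(void)snap;
		}
	}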