Date:   Tue, 24 Jan 2017 09:07:19 -0800
From:   "Paul E. McKenney" <>
To:     Lance Roy <>
Subject: Re: [PATCH] srcu: Implement more-efficient reader counts

On Mon, Jan 23, 2017 at 07:26:45PM -0800, Lance Roy wrote:
> > Yeah, we did have this same conversation awhile back, didn't we?
> > 
> > Back then, did I think to ask if this could be minimized or even prevented
> > by adding memory barriers appropriately?  ;-)
> > 
> > 							Thanx, Paul
> Yes, it can be fixed by adding a memory barrier after incrementing ->completed
> inside srcu_flip(). The upper limit on NR_CPUS turns out to be more complicated
> than this, as it needs to deal with highly nested read side critical sections
> mixed with the critical section loops, but only the one memory barrier should
> be necessary.

Something like this, then?

							Thanx, Paul


commit 35be9e413dde662fc9661352e595105ac4b0b167
Author: Paul E. McKenney <>
Date:   Tue Jan 24 08:51:34 2017 -0800

    srcu: Reduce probability of SRCU ->unlock_count[] counter overflow

    Because there are no memory barriers between the srcu_flip() ->completed
    increment and the summation of the read-side ->unlock_count[] counters,
    both the compiler and the CPU can reorder the summation with the
    ->completed increment.  If the updater is preempted long enough during
    this process, the read-side counters could overflow, resulting in a
    too-short grace period.

    This commit therefore adds a memory barrier just after the ->completed
    increment, ensuring that if the summation misses an increment of
    ->unlock_count[] from __srcu_read_unlock(), the next __srcu_read_lock()
    will see the new value of ->completed, thus bounding the number of
    ->unlock_count[] increments that can be missed to NR_CPUS.  The actual
    overflow computation is more complex due to the possibility of nesting
    of __srcu_read_lock().

    Reported-by: Lance Roy <>
    Signed-off-by: Paul E. McKenney <>

diff --git a/kernel/rcu/srcu.c b/kernel/rcu/srcu.c
index d3378ceb9762..aefe3ab20a6a 100644
--- a/kernel/rcu/srcu.c
+++ b/kernel/rcu/srcu.c
@@ -337,7 +337,16 @@ static bool try_check_zero(struct srcu_struct *sp, int idx, int trycount)
 static void srcu_flip(struct srcu_struct *sp)
 {
-	sp->completed++;
+	WRITE_ONCE(sp->completed, sp->completed + 1);
+
+	/*
+	 * Ensure that if the updater misses an __srcu_read_unlock()
+	 * increment, that task's next __srcu_read_lock() will see the
+	 * above counter update.  Note that both this memory barrier
+	 * and the one in srcu_readers_active_idx_check() provide the
+	 * guarantee for __srcu_read_lock().
+	 */
+	smp_mb(); /* D */  /* Pairs with C. */
 }