Message-ID: <20120222212630.GA6759@linux.vnet.ibm.com>
Date:	Wed, 22 Feb 2012 13:26:30 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Lai Jiangshan <laijs@...fujitsu.com>
Cc:	linux-kernel@...r.kernel.org, mingo@...e.hu, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...ymtl.ca,
	josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
	peterz@...radead.org, rostedt@...dmis.org, Valdis.Kletnieks@...edu,
	dhowells@...hat.com, eric.dumazet@...il.com, darren@...art.com,
	fweisbec@...il.com, patches@...aro.org
Subject: Re: [PATCH 2/3 RFC paul/rcu/srcu] srcu: only increase the upper bit
 for srcu_read_lock()

On Wed, Feb 22, 2012 at 01:20:56PM -0800, Paul E. McKenney wrote:
> On Wed, Feb 22, 2012 at 05:29:32PM +0800, Lai Jiangshan wrote:
> > From de49bb517e6367776e2226b931346ab6c798b122 Mon Sep 17 00:00:00 2001
> > From: Lai Jiangshan <laijs@...fujitsu.com>
> > Date: Wed, 22 Feb 2012 10:41:59 +0800
> > Subject: [PATCH 2/3 RFC paul/rcu/srcu] srcu: only increase the upper bit for srcu_read_lock()
> > 
> > The algorithm/smp_mb()s ensure 'there is only one srcu_read_lock()
> > between flip and recheck for a cpu'.
> > Incrementing the upper bit only in srcu_read_lock() is enough to
> > ensure that a single lock/unlock pair changes the cpu counter.
> 
> Very nice!  Also makes it clearer that no combination of operations
> including exactly one increment can possibly wrap back to the same value,
> because the upper bit would be different.

Make that without underflow -- one increment and 2^31+1 decrements would
in fact return the counter to its original value, but that would require
cramming more than two billion tasks into a 32-bit address space, which
I believe to be sufficiently unlikely.  (Famous last words...)
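
A tiny user-space sketch of that arithmetic, for what it's worth, assuming
a 32-bit counter with the upper bit reserved as in the patch below (this
only illustrates the wrap, it is not kernel code):

	#include <stdint.h>
	#include <stdio.h>

	#define USAGE_COUNT (1u << 31)	/* upper bit, set only by the lock side */

	int main(void)
	{
		uint32_t c = 0;

		/* One srcu_read_lock(): sets the upper bit and adds one reference. */
		c += USAGE_COUNT + 1;

		/* Any smaller number of decrements leaves the upper bit different... */
		c -= 1;
		printf("after one lock/unlock pair: %#x\n", c);	/* 0x80000000 */

		/* ...but 2^31 + 1 decrements in total wrap back to the start. */
		c -= USAGE_COUNT;	/* the remaining 2^31 decrements, batched */
		printf("after 2^31 + 1 decrements:  %#x\n", c);	/* back to 0 */

		return 0;
	}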

							Thanx, Paul

> In deference to Peter Zijlstra's sensibilities, I changed the:
> 
> 	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) += -1;
> 
> to:
> 
> 	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) -= 1;
> 
> I did manage to resist the temptation to instead say:
> 
> 	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) -= +1;
> 
> ;-)
> 
> Queued, thank you!
> 
> 							Thanx, Paul
> 
> > Signed-off-by: Lai Jiangshan <laijs@...fujitsu.com>
> > ---
> >  include/linux/srcu.h |    2 +-
> >  kernel/srcu.c        |   11 +++++------
> >  2 files changed, 6 insertions(+), 7 deletions(-)
> > 
> > diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> > index a478c8e..5b49d41 100644
> > --- a/include/linux/srcu.h
> > +++ b/include/linux/srcu.h
> > @@ -35,7 +35,7 @@ struct srcu_struct_array {
> >  };
> > 
> >  /* Bit definitions for field ->c above and ->snap below. */
> > -#define SRCU_USAGE_BITS		2
> > +#define SRCU_USAGE_BITS		1
> >  #define SRCU_REF_MASK		(ULONG_MAX >> SRCU_USAGE_BITS)
> >  #define SRCU_USAGE_COUNT	(SRCU_REF_MASK + 1)
> > 
> > diff --git a/kernel/srcu.c b/kernel/srcu.c
> > index 17e95bc..a51ac48 100644
> > --- a/kernel/srcu.c
> > +++ b/kernel/srcu.c
> > @@ -138,10 +138,10 @@ static bool srcu_readers_active_idx_check(struct srcu_struct *sp, int idx)
> > 
> >  	/*
> >  	 * Now, we check the ->snap array that srcu_readers_active_idx()
> > -	 * filled in from the per-CPU counter values.  Since both
> > -	 * __srcu_read_lock() and __srcu_read_unlock() increment the
> > -	 * upper bits of the per-CPU counter, an increment/decrement
> > -	 * pair will change the value of the counter.  Since there is
> > +	 * filled in from the per-CPU counter values. Since
> > +	 * __srcu_read_lock() increments the upper bits of
> > +	 * the per-CPU counter, an increment/decrement pair will
> > +	 * change the value of the counter.  Since there is
> >  	 * only one possible increment, the only way to wrap the counter
> >  	 * is to have a huge number of counter decrements, which requires
> >  	 * a huge number of tasks and huge SRCU read-side critical-section
> > @@ -234,8 +234,7 @@ void __srcu_read_unlock(struct srcu_struct *sp, int idx)
> >  {
> >  	preempt_disable();
> >  	smp_mb(); /* C */  /* Avoid leaking the critical section. */
> > -	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) +=
> > -		SRCU_USAGE_COUNT - 1;
> > +	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) += -1;
> >  	preempt_enable();
> >  }
> >  EXPORT_SYMBOL_GPL(__srcu_read_unlock);
> > -- 
> > 1.7.4.4
> > 
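A rough user-space model of the counter convention this patch leaves
behind, assuming the lock side adds SRCU_USAGE_COUNT + 1 (inferred from
the commit summary and the pre-patch unlock line above, not quoted from
the tree); per-CPU placement, ACCESS_ONCE(), and the memory barriers are
omitted:

	#include <limits.h>
	#include <stdio.h>

	#define SRCU_USAGE_BITS		1
	#define SRCU_REF_MASK		(ULONG_MAX >> SRCU_USAGE_BITS)
	#define SRCU_USAGE_COUNT	(SRCU_REF_MASK + 1)

	static unsigned long c[2];	/* stand-in for one CPU's ->c[] pair */

	static int model_read_lock(int idx)
	{
		c[idx] += SRCU_USAGE_COUNT + 1;	/* upper bit + one reference */
		return idx;
	}

	static void model_read_unlock(int idx)
	{
		c[idx] -= 1;			/* reference only; upper bit untouched */
	}

	int main(void)
	{
		unsigned long snap = c[0];
		int idx = model_read_lock(0);

		model_read_unlock(idx);

		/* A single lock/unlock pair still changes the counter: the upper bit flipped. */
		printf("snap=%#lx now=%#lx refs=%#lx\n",
		       snap, c[0], c[0] & SRCU_REF_MASK);
		return 0;
	}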

