Date:   Thu, 17 Nov 2016 07:03:50 -0800
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Boqun Feng <boqun.feng@...il.com>
Cc:     Lai Jiangshan <jiangshanlai@...il.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>, dipankar@...ibm.com,
        akpm@...ux-foundation.org,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Josh Triplett <josh@...htriplett.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        David Howells <dhowells@...hat.com>, edumazet@...gle.com,
        dvhart@...ux.intel.com,
        Frédéric Weisbecker <fweisbec@...il.com>,
        oleg@...hat.com, bobby.prani@...il.com, ldr709@...il.com
Subject: Re: [PATCH RFC tip/core/rcu] SRCU rewrite

On Thu, Nov 17, 2016 at 10:31:00PM +0800, Boqun Feng wrote:
> On Thu, Nov 17, 2016 at 08:18:51PM +0800, Lai Jiangshan wrote:
> > On Tue, Nov 15, 2016 at 10:37 PM, Paul E. McKenney
> > <paulmck@...ux.vnet.ibm.com> wrote:
> > > On Tue, Nov 15, 2016 at 09:44:45AM +0800, Boqun Feng wrote:
> > 
> > >>
> >> __srcu_read_lock() used to be called with preemption disabled. I guess
> >> the reason was that we had two percpu variables to increment. So with
> >> only one percpu variable now, could we remove the preempt_{dis,en}able()
> >> in srcu_read_lock() and use this_cpu_inc() here?
> > >
> > > Quite possibly...
> > >
> > 
> 
> Hello, Lai ;-)
> 
> > It will be nicer if it is removed.
> > 
> > The reason for disabling preemption was also that we have
> > to disallow any preemption between the fetching of the idx
> > and the increment, so that we have at most NR_CPUS worth
> > of readers using the old index that haven't incremented the counters.
> > 
> 
> After reading the comment for a while, I actually have a question; maybe
> I am missing something ;-)
> 
> Why would "at most NR_CPUS worth of readers using the old index that
> haven't incremented the counters" save us from overflowing the counter?
> 
> Please consider the following case in current implementation:
> 
> 
> {sp->completed = 0} so idx = 1 in srcu_advance_batches(...)
> 
> One thread A is currently in __srcu_read_lock(), using idx = 1 and
> about to increment the percpu c[idx], and ULONG_MAX __srcu_read_lock()s
> have already been called and returned with idx = 1. Please note I think
> this is possible because I assume we may have some code like this:
> 
> 	unsigned long i = 0;
> 	for (; i < ULONG_MAX; i++)
> 		srcu_read_lock(sp); /* each call assumed to return the same idx, 1 */

First, please don't do this.  For any number of reasons!  ;-)

Second, the theory is that if the updater fails to see the update from
one of the srcu_read_lock() calls in the loop, then the reader must see
the new index on the next pass through the loop.  Which would be one of
the problems with the above loop -- it cannot be guaranteed that they
all will return the same index.
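
For concreteness, the pre-rewrite fast path looks roughly like this (a
simplified sketch of the then-current kernel/rcu/srcu.c; field names
approximate, lockdep hooks omitted):

	/* srcu_read_lock() wraps this call in preempt_disable(). */
	int __srcu_read_lock(struct srcu_struct *sp)
	{
		int idx;

		idx = READ_ONCE(sp->completed) & 0x1;	/* fetch the index... */
		__this_cpu_inc(sp->per_cpu_ref->c[idx]); /* ...and count on it */
		smp_mb(); /* B */  /* Avoid leaking the critical section. */
		__this_cpu_inc(sp->per_cpu_ref->seq[idx]);
		return idx;
	}

With the rewrite's single percpu counter, the two __this_cpu_inc()
calls collapse into one, and this_cpu_inc() is on its own atomic with
respect to preemption.  That is what makes dropping the surrounding
preempt_{dis,en}able() plausible: a task preempted between the
READ_ONCE() and the increment simply becomes one more straggler still
using the old index.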

> And none of the corresponding srcu_read_unlock() has been called;
> 
> In this case, at the time thread A increments the percpu c[idx], that
> will result in an overflow, right? So even one reader using the old idx
> can result in overflow.

It is quite possible that the NR_CPUS bound is too tight, but the memory
barriers do prevent readers from seeing the old index beyond a certain
point.
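
As a hedged illustration, that pairing is the classic store-buffering
pattern.  In litmus-test form (hypothetical variable names standing in
for the index word and the old-index percpu counter):

	C srcu-flip-vs-reader

	{}

	P0(int *completed, int *c_old)	/* updater */
	{
		int r0;

		WRITE_ONCE(*completed, 1);	/* flip the index... */
		smp_mb();			/* ...then rescan counters */
		r0 = READ_ONCE(*c_old);
	}

	P1(int *completed, int *c_old)	/* reader */
	{
		int r1;

		WRITE_ONCE(*c_old, 1);		/* bump the old counter... */
		smp_mb();			/* barrier B */
		r1 = READ_ONCE(*completed);	/* ...then refetch the index */
	}

	exists (0:r0=0 /\ 1:r1=0)	(* forbidden by the paired smp_mb()s *)

Either the updater's rescan sees the straggling increment, or that
reader's next fetch of ->completed sees the flipped index, so use of
the old index is bounded rather than indefinite.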

> I think the reason we won't be hit by overflow is not that we have few
> readers using the old idx; it's that it is unlikely that ULONG_MAX + 1
> __srcu_read_lock() calls happen for the same idx, right? And the reason
> for this is more complex: we won't have that many threads in the system,
> no thread will nest srcu many levels deep, and there won't be a lot of
> readers using the old idx.
> 
> And this will still be true if we use the new mechanism and shrink the
> preemption-disabled section, right?

Well, the analysis needs to be revisited, for sure.  ;-)
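
One input to that analysis: the grace-period check only compares sums
of the lock-side and unlock-side counters, along these lines (a sketch
of the rewritten code; names approximate):

	static bool srcu_readers_active_idx_check(struct srcu_struct *sp, int idx)
	{
		unsigned long unlocks;

		unlocks = srcu_readers_unlock_idx(sp, idx);
		smp_mb(); /* Read unlock counts before lock counts. */
		return srcu_readers_lock_idx(sp, idx) == unlocks;
	}

Because the comparison is modulo ULONG_MAX + 1, mere wrapping is
harmless; a false "no readers" would need the unaccounted increments
on the old index to amount to an exact nonzero multiple of
2^BITS_PER_LONG, which the straggler bound (NR_CPUS, or ULONG_MAX/4
with preemption enabled) exists to rule out.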

								Thanx, Paul

> Regards,
> Boqun
> 
> > If we remove the preempt_{dis,en}able(), we must change the
> > "NR_CPUS" in the comment to ULONG_MAX/4 (I assume one on-going
> > reader needs at least 4 bytes of stack), so it is still safe.
> > 
> > But we still need to think more if we want to remove the
> > preempt_{dis,en}able().
> > 
> > Thanks
> > Lai

