Message-Id: <20161118132754.GP3612@linux.vnet.ibm.com>
Date:   Fri, 18 Nov 2016 05:27:54 -0800
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Lance Roy <ldr709@...il.com>
Cc:     Lai Jiangshan <jiangshanlai@...il.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>, dipankar@...ibm.com,
        akpm@...ux-foundation.org,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Josh Triplett <josh@...htriplett.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        David Howells <dhowells@...hat.com>,
        Eric Dumazet <edumazet@...gle.com>, dvhart@...ux.intel.com,
        Frédéric Weisbecker <fweisbec@...il.com>,
        oleg@...hat.com, pranith kumar <bobby.prani@...il.com>
Subject: Re: [PATCH RFC tip/core/rcu] SRCU rewrite

On Thu, Nov 17, 2016 at 11:53:04AM -0800, Lance Roy wrote:
> On Thu, 17 Nov 2016 21:58:34 +0800
> Lai Jiangshan <jiangshanlai@...il.com> wrote:
> > From the changelog, it sounds as though "ULONG_MAX - NR_CPUS" is the
> > limit of the implementations (old or this one), but the real maximum
> > number of active readers is actually much smaller. I think ULONG_MAX/4
> > could be used here instead, and that part of the changelog could be
> > removed.
> In the old version, there are two separate limits. The first is that
> there can be no more than ULONG_MAX nested or parallel readers, as
> otherwise ->c[] would overflow.
> 
> The other limit is to prevent ->seq[] from overflowing during
> srcu_readers_active_idx_check(). For this to happen, there must be
> ULONG_MAX+1 readers that loaded ->completed before srcu_flip() was run
> and that then increment ->seq[]. The ->seq[] array is supposed to
> prevent srcu_readers_active_idx_check() from completing successfully if
> any such readers increment ->seq[], because otherwise they could
> decrement ->c[] while it is being read, which could cause it to
> incorrectly report that there are no active readers. If ->seq[]
> overflows, then there is nothing (other than sheer improbability) to
> prevent this from happening.
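> 
> In outline, that check works as follows (a paraphrase of the old
> implementation, not the verbatim kernel code):
> 
> 	static bool srcu_readers_active_idx_check(struct srcu_struct *sp,
> 						  int idx)
> 	{
> 		unsigned long seq;
> 
> 		seq = srcu_readers_seq_idx(sp, idx);	/* sum ->seq[idx] */
> 		smp_mb();
> 		if (srcu_readers_active_idx(sp, idx) != 0) /* sum ->c[idx] */
> 			return false;
> 		smp_mb();
> 		/* Fail if any reader bumped ->seq[idx] in the meantime. */
> 		return srcu_readers_seq_idx(sp, idx) == seq;
> 	}
> 
> If ->seq[] wraps all the way around between those two sums, the final
> comparison can succeed even though readers raced with the check.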
> 
> I used to think (because of the previous comment) that there could be
> at most one such increment of ->seq[] per CPU, as they would have to be
> using the old value of ->completed and preemption would be disabled.
> This is not the case, because there are no barriers around srcu_flip(),
> so the processor is not required to increment ->completed before
> reading ->seq[] the first time, nor is it required to wait until it is
> done reading ->seq[] the second time before incrementing ->completed.
> This means that the following code could cause ->seq[] to increment an
> arbitrarily large number of times between the two ->seq[] loads in
> srcu_readers_active_idx_check():
> 	while (true) {
> 		int idx = srcu_read_lock(sp);
> 		srcu_read_unlock(sp, idx);
> 	}

I also initially thought that there would need to be a memory barrier
immediately after srcu_flip().  But after further thought, I don't
believe that this is the case.

The key point is that updaters do the flip, sum the unlock counters,
do a full memory barrier, then sum the lock counters.
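
To make that ordering concrete, here is a rough standalone C model of
the update-side check (the names, layout, and NCPU are mine, not the
patch's):

	#define NCPU 4	/* stand-in for NR_CPUS */

	static unsigned long lock_count[NCPU][2];	/* per-CPU lock counts */
	static unsigned long unlock_count[NCPU][2];	/* per-CPU unlock counts */

	static int readers_active_idx_check(int idx)
	{
		unsigned long unlocks = 0, locks = 0;
		int cpu;

		/* The flip of the active index has already happened here. */
		for (cpu = 0; cpu < NCPU; cpu++)
			unlocks += unlock_count[cpu][idx];

		__sync_synchronize();	/* full barrier: unlock sums before lock sums */

		for (cpu = 0; cpu < NCPU; cpu++)
			locks += lock_count[cpu][idx];

		/*
		 * Any unlock counted above has a corresponding lock that
		 * this ordering forces us to see, so locks never falls
		 * below unlocks: equality can be missed, but never
		 * spuriously reported.
		 */
		return locks == unlocks;
	}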

We therefore know that if an updater sees an unlock, it is guaranteed
to see the corresponding lock, which prevents negative sums.  However,
it is true that the flip and the unlock reads can be interchanged.
This can result in failing to see a count of zero, but it cannot result
in spuriously seeing a count of zero.

More to the point, if an updater fails to see a lock, the next time
that CPU/task does an srcu_read_lock(), that CPU/task is guaranteed
to see the new value of the index.  This limits the number of CPUs/tasks
that can be using the old value of the index.  Because preemption
is disabled across the fetch of the index and the increment of the lock
count, that number is at most NR_CPUS-1, since the updater has to be
running on one of the CPUs (as Mathieu pointed out earlier in this
thread).
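
For reference, the reader side that this argument relies on looks
roughly like the following, continuing the model above (again my names;
the kernel's preempt_disable()/preempt_enable() appear only as
comments):

	static int completed;	/* low-order bit selects the active index */

	static int model_read_lock(int cpu)
	{
		int idx;

		/* preempt_disable() in the kernel: no migration below. */
		idx = completed & 0x1;		/* fetch the current index */
		lock_count[cpu][idx]++;		/* count the lock on this CPU */
		__sync_synchronize();		/* order before the critical section */
		/* preempt_enable() */
		return idx;
	}

The window in which a task can be committed to the old index is thus
confined to that preemption-disabled region.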

Or am I missing something?

							Thanx, Paul
