Message-Id: <20170124015715.GD28085@linux.vnet.ibm.com>
Date: Mon, 23 Jan 2017 17:57:15 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Lance Roy <ldr709@...il.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
dvhart@...ux.intel.com, fweisbec@...il.com, oleg@...hat.com,
bobby.prani@...il.com
Subject: Re: [PATCH] srcu: Implement more-efficient reader counts
On Mon, Jan 23, 2017 at 04:53:34PM -0800, Lance Roy wrote:
> On Mon, 23 Jan 2017 16:42:52 -0800
> "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:
>
> > On Mon, Jan 23, 2017 at 01:35:18PM -0800, Lance Roy wrote:
> > > SRCU uses two per-cpu counters: a nesting counter to count the number of
> > > active critical sections, and a sequence counter to ensure that the nesting
> > > counters don't change while they are being added together in
> > > srcu_readers_active_idx_check().
> > >
> > > This patch instead uses per-cpu lock and unlock counters. Because both
> > > counters only increase and srcu_readers_active_idx_check() reads the unlock
> > > counter before the lock counter, this achieves the same end without having
> > > to increment two different counters in srcu_read_lock(). This also saves a
> > > smp_mb() in srcu_readers_active_idx_check().
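> > >
> > > A reader-side sketch of what this scheme implies (approximate, details
> > > elided; sp->per_cpu_ref is an assumed pointer to the per-CPU counter
> > > structure):
> > >
> > > 	int __srcu_read_lock(struct srcu_struct *sp)
> > > 	{
> > > 		int idx = READ_ONCE(sp->completed) & 0x1;
> > >
> > > 		/* Single increment, where the old scheme needed two. */
> > > 		__this_cpu_inc(sp->per_cpu_ref->lock_count[idx]);
> > > 		smp_mb(); /* Keep the critical section after the increment. */
> > > 		return idx;
> > > 	}
> > >
> > > 	void __srcu_read_unlock(struct srcu_struct *sp, int idx)
> > > 	{
> > > 		smp_mb(); /* Keep the critical section before the increment. */
> > > 		this_cpu_inc(sp->per_cpu_ref->unlock_count[idx]);
> > > 	}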
> > >
> > > Possible bug: There is no guarantee that the lock counter won't overflow
> > > during srcu_readers_active_idx_check(), as there are no memory barriers
> > > around srcu_flip() (see comment in srcu_readers_active_idx_check() for
> > > details). However, this problem was already present before this patch.
> > >
> > > Suggested-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
> > > Signed-off-by: Lance Roy <ldr709@...il.com>
> > > Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> > > Cc: Lai Jiangshan <jiangshanlai@...il.com>
> > > Cc: Peter Zijlstra <peterz@...radead.org>
> >
> > OK, this differs only in the comments, so I can reasonably
> > substitute it, even this near the next merge window.
> >
> > But let's talk about the potential overflow. If I understand correctly,
> > for this to happen, there needs to be ULONG_MAX/2 or thereabouts
> > srcu_read_lock() calls without matching srcu_read_unlock() calls.
> > Let's focus on 32-bit systems for the moment.
> >
> > Lockdep allows at most 48 locks held at a given time by any single task,
> > and RCU does not pass in a non-NULL nest_lock -- if you acquire more than
> > that, a lockdep kernel will splat. Each task has at least one 4K page of
> > kernel stack, so there can be at most 1,048,576 tasks (actually quite a
> > bit fewer due to the size of the task_struct and so on). This allows
> > at most 50,331,648 unmatched srcu_read_lock() calls in the system,
> > which is not sufficient to cause overflow.
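> >
> > (Arithmetic: a 32-bit address space holds at most 2^32 / 2^12 = 1,048,576
> > four-kilobyte stacks, and 48 locks/task * 1,048,576 tasks = 50,331,648,
> > well below the ULONG_MAX/2 = 2,147,483,647 needed for overflow.)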
> >
> > Or am I missing something here?
> >
> > Thanx, Paul
>
> It isn't the nesting that is the problem. Here is a previous email I wrote
> describing the problem:
>
>
> The trouble is that disabling preemption is not enough to ensure that there
> is at most one srcu_read_lock() call per CPU that missed the srcu_flip().
>
> Define a reader to be an SRCU lock+unlock pair. A reader is called active if it
> has incremented ->lock_count[] but hasn't incremented ->unlock_count[] yet, and
> completed if it has incremented ->unlock_count[]. I think that we only want to
> limit the number of active readers and the number of CPUs. In particular, I
> don't think there should be a limit on the rate of completion of read-side
> critical sections.
>
> The goal of srcu_readers_active_idx_check() is to verify that there were zero
> active readers on the inactive index at some time during its execution. To do
> this, it totals the unlock counts, executes a memory barrier, totals the lock
> counts, and takes the difference. This difference counts the readers that are
> active when srcu_readers_lock_idx() gets to their CPU, plus the readers that
> completed after srcu_readers_unlock_idx() and before srcu_readers_lock_idx().
> If the true (infinite precision) value of the difference is zero, then there
> were no active readers at some point while srcu_readers_lock_idx() was running.
> However, the difference is only stored in a long, so there is a potential for
> overflow if too many readers complete during srcu_readers_active_idx_check().
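>
> A sketch of the check just described (illustrative only; assumes per-CPU
> counter arrays reachable via an sp->per_cpu_ref pointer):
>
> 	static bool srcu_readers_active_idx_check(struct srcu_struct *sp, int idx)
> 	{
> 		unsigned long unlocks = 0, locks = 0;
> 		int cpu;
>
> 		for_each_possible_cpu(cpu)
> 			unlocks += per_cpu_ptr(sp->per_cpu_ref, cpu)->unlock_count[idx];
> 		smp_mb(); /* Read all unlock counts before any lock counts. */
> 		for_each_possible_cpu(cpu)
> 			locks += per_cpu_ptr(sp->per_cpu_ref, cpu)->lock_count[idx];
>
> 		/* Zero active readers existed at some instant iff totals match. */
> 		return locks == unlocks;
> 	}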
>
> For example, let's say there are three threads, each running on their own CPU:
>
> int data, flag;
> struct srcu_struct *sp = /* ... */;
>
> Thread 0:
> 	data = 1;
> 	synchronize_srcu(sp);
> 	flag = 1;
>
> Thread 1:
> 	int data_r, flag_r;
> 	int idx = srcu_read_lock(sp);
> 	data_r = data;
> 	flag_r = flag;
> 	srcu_read_unlock(sp, idx);
> 	BUG_ON(flag_r == 1 && data_r == 0);
>
> Thread 2:
> 	while (true) {
> 		int idx = srcu_read_lock(sp);
> 		srcu_read_unlock(sp, idx);
> 	}
>
> Let's take the following execution order. Thread 1 increments
> the CPU 1 version of sp->lock_count[0], sets idx to zero, and loads data (0)
> into data_r. Thread 0 then sets data to 1, verifies that there are no
> readers on index 1, and increments sp->completed, but the CPU doesn't actually
> perform the last operation, putting it off until the next memory barrier. Thread
> 0 then calls srcu_readers_active_idx_check() on index 0, which runs
> srcu_readers_unlock_idx() (returning 0). Right after srcu_readers_unlock_idx()
> completes, thread 2 starts running. Since Thread 0 hasn't actually incremented
> sp->completed in any way that is visible to thread 2, srcu_read_lock() will
> still return 0. Thread 2 can then run for ULONG_MAX iterations, driving the
> CPU 2 copies of sp->lock_count[0] and sp->unlock_count[0] up to ULONG_MAX.
> CPU 0 then finally gets around to incrementing sp->completed, runs its memory
> barrier, and then reads
> the lock counts: 1, 0, ULONG_MAX. The total of ULONG_MAX+1 will overflow to 0
> and compare equal to the earlier unlock-count total. Thread 0 will then think
> that the grace period is over and set flag to 1. Thread 1 can then read flag (1)
> into flag_r and run srcu_read_unlock(). The BUG_ON() will then trigger.
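>
> (In other words, the lock total (1 + 0 + ULONG_MAX) wraps to 0 in unsigned
> long arithmetic, matching the unlock total of 0 that was read earlier, even
> though thread 1's reader is still active.)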
>
> Although ULONG_MAX readers completed during srcu_readers_active_idx_check(),
> there were at most 2 active readers at any time, so this program doesn't run
> into any limit.
>
> I hope that was clear enough.
Yeah, we did have this same conversation a while back, didn't we?
Back then, did I think to ask if this could be minimized or even prevented
by adding memory barriers appropriately? ;-)
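
Just to make the question concrete, I am thinking of something along
these lines (illustrative only, not a tested patch):

	static void srcu_flip(struct srcu_struct *sp)
	{
		smp_mb(); /* Prior unlock-count reads before the index flip. */
		WRITE_ONCE(sp->completed, sp->completed + 1);
		smp_mb(); /* Index flip before subsequent lock-count reads. */
	}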
Thanx, Paul