Message-ID: <20221215165452.GA1957735@lothringen>
Date: Thu, 15 Dec 2022 17:54:52 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: boqun.feng@...il.com, joel@...lfernandes.org,
neeraj.iitr10@...il.com, urezki@...il.com, rcu@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC] srcu: Yet more detail for
srcu_readers_active_idx_check() comments
On Wed, Dec 14, 2022 at 11:13:55AM -0800, Paul E. McKenney wrote:
> The comment in srcu_readers_active_idx_check() following the smp_mb()
> is out of date, hailing from a simpler time when preemption was disabled
> across the bulk of __srcu_read_lock(). The fact that preemption was
> disabled meant that the number of tasks that had fetched the old index
> but not yet incremented counters was limited by the number of CPUs.
>
> In our more complex modern times, the number of CPUs is no longer a limit.
> This commit therefore updates this comment, additionally giving more
> memory-ordering detail.
>
> Reported-by: Boqun Feng <boqun.feng@...il.com>
> Reported-by: Frederic Weisbecker <frederic@...nel.org>
Not really: while you guys were debating that comment, I was still staring
at the previous one (as usual).
Or to put it in an SRCU way, while you guys saw the flipped idx, I was still
using the old one :)
> - * OK, how about nesting? This does impose a limit on nesting
> - * of floor(ULONG_MAX/NR_CPUS/2), which should be sufficient,
> - * especially on 64-bit systems.
> + * It can clearly do so once, given that it has already fetched
> + * the old value of ->srcu_idx and is just about to use that value
> + * to index its increment of ->srcu_lock_count[idx]. But as soon as
> + * it leaves that SRCU read-side critical section, it will increment
> + * ->srcu_unlock_count[idx], which must follow the updater's above
> + * read from that same value. Thus, as soon as the reading task does
> + * an smp_mb() and a later fetch from ->srcu_idx, that task will be
> + * guaranteed to get the new index. Except that the increment of
> + * ->srcu_unlock_count[idx] in __srcu_read_unlock() is after the
> + * smp_mb(), and the fetch from ->srcu_idx in __srcu_read_lock()
> + * is before the smp_mb(). Thus, that task might not see the new
> + * value of ->srcu_idx until the -second- __srcu_read_lock(),
> + * which in turn means that this task might well increment
> + * ->srcu_lock_count[idx] for the old value of ->srcu_idx twice,
> + * not just once.
You lost me on that one.
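For reference, here is how I'm reading the read-side primitives. This is a
hand-simplified sketch of __srcu_read_lock() and __srcu_read_unlock(), close
to but not necessarily the literal kernel source (the per-CPU accessor
details in particular are approximated):

int __srcu_read_lock(struct srcu_struct *ssp)
{
	int idx;

	idx = READ_ONCE(ssp->srcu_idx) & 0x1;		/* fetch the current index */
	this_cpu_inc(ssp->sda->srcu_lock_count[idx]);	/* count this reader in */
	smp_mb(); /* B */	/* order the increment before the critical section */
	return idx;
}

void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
{
	smp_mb(); /* C */	/* order the critical section before the increment */
	this_cpu_inc(ssp->sda->srcu_unlock_count[idx]);	/* count this reader out */
}

So the ->srcu_idx fetch sits before smp_mb() B, and the ->srcu_unlock_count[idx]
increment sits after smp_mb() C, which is the asymmetry the new comment is
describing.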
UPDATER                             READER
-------                             ------
//srcu_readers_lock_idx             //srcu_read_lock
idx = ssp->srcu_idx;                idx = ssp->srcu_idx;
READ srcu_lock_count[idx ^ 1]       srcu_lock_count[idx]++
smp_mb();                           smp_mb();
//flip_index                        /* srcu_read_unlock (ignoring on purpose) */
ssp->srcu_idx++;                    /* smp_mb(); */
smp_mb();                           /* srcu_unlock_count[old_idx]++ */
//srcu_readers_lock_idx             //srcu_read_lock again
idx = ssp->srcu_idx;                idx = ssp->srcu_idx;
READ srcu_lock_count[idx ^ 1]
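For completeness, the UPDATER column is my shorthand for roughly the sequence
below; again a hand-simplified sketch of the srcu_readers_lock_idx() scan plus
srcu_flip(), with the per-CPU summation collapsed, not the literal kernel code:

	int cpu, idx;
	unsigned long sum = 0;

	/* srcu_readers_lock_idx(), roughly: scan the lock counts for the old index */
	idx = ssp->srcu_idx;
	for_each_possible_cpu(cpu)
		sum += READ_ONCE(per_cpu_ptr(ssp->sda, cpu)->srcu_lock_count[idx ^ 1]);

	smp_mb();		/* order the scan before the flip */

	/* srcu_flip(), roughly: publish the new index */
	ssp->srcu_idx++;
	smp_mb();		/* order the flip before the updater's re-scan */

and then the scan repeats with the freshly read ->srcu_idx.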
Scenario for the reader to increment the old idx once:
_ Assume ssp->srcu_idx is initially 0.
_ The READER reads the idx, which is 0
_ The updater runs and flips the idx, which is now 1
_ The reader resumes with 0 as its index, but on the next srcu_read_lock()
  it will see the new idx, which is 1
What could be the scenario for it to increment the old idx twice?
Thanks.