Message-ID: <20221215185326.GJ4001@paulmck-ThinkPad-P17-Gen-1>
Date:   Thu, 15 Dec 2022 10:53:26 -0800
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Frederic Weisbecker <frederic@...nel.org>
Cc:     boqun.feng@...il.com, joel@...lfernandes.org,
        neeraj.iitr10@...il.com, urezki@...il.com, rcu@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC] srcu: Yet more detail for
 srcu_readers_active_idx_check() comments

On Thu, Dec 15, 2022 at 06:58:30PM +0100, Frederic Weisbecker wrote:
> On Thu, Dec 15, 2022 at 09:08:34AM -0800, Paul E. McKenney wrote:
> > On Thu, Dec 15, 2022 at 05:54:52PM +0100, Frederic Weisbecker wrote:
> > > On Wed, Dec 14, 2022 at 11:13:55AM -0800, Paul E. McKenney wrote:
> > > > The comment in srcu_readers_active_idx_check() following the smp_mb()
> > > > is out of date, hailing from a simpler time when preemption was disabled
> > > > across the bulk of __srcu_read_lock().  The fact that preemption was
> > > > disabled meant that the number of tasks that had fetched the old index
> > > > but not yet incremented counters was limited by the number of CPUs.
> > > > 
> > > > In our more complex modern times, the number of CPUs is no longer a limit.
> > > > This commit therefore updates this comment, additionally giving more
> > > > memory-ordering detail.
> > > > 
> > > > Reported-by: Boqun Feng <boqun.feng@...il.com>
> > > > Reported-by: Frederic Weisbecker <frederic@...nel.org>
> > > 
> > > Not really: while you guys were debating that comment, I was still staring
> > > at the previous one (as usual).
> > > 
> > > Or to put it in an SRCU way, while you guys saw the flipped idx, I was still
> > > using the old one :)
> > > 
> > > > -	 * OK, how about nesting?  This does impose a limit on nesting
> > > > -	 * of floor(ULONG_MAX/NR_CPUS/2), which should be sufficient,
> > > > -	 * especially on 64-bit systems.
> > > > +	 * It can clearly do so once, given that it has already fetched
> > > > +	 * the old value of ->srcu_idx and is just about to use that value
> > > > +	 * to index its increment of ->srcu_lock_count[idx].  But as soon as
> > > > +	 * it leaves that SRCU read-side critical section, it will increment
> > > > +	 * ->srcu_unlock_count[idx], which must follow the updater's above
> > > > +	 * read from that same value.  Thus, as soon as the reading task does
> > > > +	 * an smp_mb() and a later fetch from ->srcu_idx, that task will be
> > > > +	 * guaranteed to get the new index.  Except that the increment of
> > > > +	 * ->srcu_unlock_count[idx] in __srcu_read_unlock() is after the
> > > > +	 * smp_mb(), and the fetch from ->srcu_idx in __srcu_read_lock()
> > > > +	 * is before the smp_mb().  Thus, that task might not see the new
> > > > +	 * value of ->srcu_idx until the -second- __srcu_read_lock(),
> > > > +	 * which in turn means that this task might well increment
> > > > +	 * ->srcu_lock_count[idx] for the old value of ->srcu_idx twice,
> > > > +	 * not just once.
> > > 
> > > You lost me on that one.
> > > 
> > >       UPDATER                               READER
> > >       -------                               ------
> > >       //srcu_readers_lock_idx               //srcu_read_lock
> > >       idx = ssp->srcu_idx;                  idx = ssp->srcu_idx;
> > >       READ srcu_lock_count[idx ^ 1]         srcu_lock_count[idx]++
> > 
> > Shouldn't this be "READ srcu_unlock_count[idx ^ 1]"?
> > 
> > And then the above paragraph assumes that the updater gets stuck here...
> 
> Right I ignored the unlock part on purpose. But ok let's add it (later note: just switch
> directly to the next paragraph to see how I realize I'm wrong)

I do know that feeling!  There are very few things that instill a healthy
sense of humility quite like working with concurrent code.  ;-)

>       UPDATER                               READER
>       -------                               ------
>       idx = ssp->srcu_idx;                  idx = ssp->srcu_idx;
>       READ srcu_unlock_count[idx ^ 1]       srcu_lock_count[idx]++
>       smp_mb();                             smp_mb();
>       READ srcu_lock_count[idx ^ 1]         // read side crit
>       smp_mb();                             smp_mb();
>       idx = ssp->srcu_idx;                  srcu_unlock_count[old_idx]++
>       ssp->srcu_idx++;                      idx = ssp->srcu_idx;
>       smp_mb();                             
>       READ srcu_unlock_count[idx ^ 1]
>       smp_mb();
>       READ srcu_lock_count[idx ^ 1]  
> 
> > Unless I am missing something, the reader must reference the
> > srcu_unlock_count[old_idx] and then do smp_mb() before it will be
> > absolutely guaranteed of seeing the new value of ->srcu_idx.
> > 
> > So what am I missing?
> 
> But there is the smp_mb() between the srcu_lock_count[idx]++ of the 1st
> srcu_read_lock() and the idx READ from the second srcu_read_lock():
> 
>          WRITER                                READER
>          -----                                 -------
>          WRITE idx                             WRITE srcu_lock_count[old_idx]
>          smp_mb()                              smp_mb()
>          READ srcu_lock_count[new_idx]         READ idx
> 
> Ah wait! On SCAN2 we are reading the count from the _new_ idx, not the old one, ok
> that's why it doesn't work. So then for it to write twice on the old idx we have:
> 
> _ idx is initially 0
> _ READER fetches idx (idx=0) and is preempted
> _ New GP: Updater goes through its whole thing and flips idx
> _ Yet another new GP: Updater goes again but is preempted in the middle of
> SCAN1: it has read unlock_count but not yet lock_count
> _ READER increments lock_count, then unlock_count, for the old idx (0).
> _ New READER: indeed, we don't have a barrier between the unlock_count write and
>   the idx read, so we read the idx again unordered against the previous WRITE to
>   unlock_count.  This may still be the old idx (0): we increment lock_count, there
>   goes the desired smp_mb(), then we increment unlock_count of the old idx (0).
> _ Yet another READER: finally we see the new idx (1).
> 
> Phew! Did I get it right this time? :))

Either you got it right or we both got it wrong in the same way.  ;-)

							Thanx, Paul
