Message-ID: <20210318170937.GF2696@paulmck-ThinkPad-P72>
Date: Thu, 18 Mar 2021 10:09:37 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...com, mingo@...nel.org, jiangshanlai@...il.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
fweisbec@...il.com, oleg@...hat.com, joel@...lfernandes.org
Subject: Re: [PATCH tip/core/rcu 1/3] rcu: Provide polling interfaces for
Tree RCU grace periods

On Thu, Mar 18, 2021 at 03:59:52PM +0100, Frederic Weisbecker wrote:
> On Tue, Mar 16, 2021 at 09:51:01AM -0700, Paul E. McKenney wrote:
> > On Tue, Mar 16, 2021 at 04:17:50PM +0100, Frederic Weisbecker wrote:
> > > On Wed, Mar 03, 2021 at 04:26:30PM -0800, paulmck@...nel.org wrote:
> > > > +/**
> > > > + * poll_state_synchronize_rcu - Has the specified RCU grace period completed?
> > > > + *
> > > > + * @oldstate: value returned by get_state_synchronize_rcu() or start_poll_synchronize_rcu()
> > > > + *
> > > > + * If a full RCU grace period has elapsed since the earlier call from
> > > > + * which oldstate was obtained, return @true, otherwise return @false.
> > > > + *
> > > > + * Yes, this function does not take counter wrap into account.
> > > > + * But counter wrap is harmless. If the counter wraps, we have waited for
> > > > + * more than 2 billion grace periods (and way more on a 64-bit system!).
> > > > + * Those needing to keep oldstate values for very long time periods
> > > > + * (many hours even on 32-bit systems) should check them occasionally
> > > > + * and either refresh them or set a flag indicating that the grace period
> > > > + * has completed.
> > > > + */
> > > > +bool poll_state_synchronize_rcu(unsigned long oldstate)
> > > > +{
> > > > +	if (rcu_seq_done(&rcu_state.gp_seq, oldstate)) {
> > > > +		smp_mb(); /* Ensure GP ends before subsequent accesses. */
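
[ Aside for anyone new to these interfaces: typical usage looks something
  like the sketch below.  This is just an illustration, not code from the
  patch; struct foo, foo_retire(), and foo_try_free() are made-up names. ]

#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	struct list_head list;
	unsigned long gp_snap;	/* Cookie from start_poll_synchronize_rcu(). */
	int data;
};

/* Unpublish the object, then snapshot (starting if needed) a grace period. */
static void foo_retire(struct foo *fp)
{
	list_del_rcu(&fp->list);
	fp->gp_snap = start_poll_synchronize_rcu();
}

/*
 * Later, for example from a periodic scan: free only if a full grace
 * period has elapsed since foo_retire(), otherwise try again later.
 */
static bool foo_try_free(struct foo *fp)
{
	if (!poll_state_synchronize_rcu(fp->gp_snap))
		return false;	/* Grace period still in progress. */
	kfree(fp);
	return true;
}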
> > >
> > > Also as usual I'm a bit lost with the reason behind those memory barriers.
> > > So this is ordering the read on rcu_state.gp_seq against something (why not an
> > > smp_rmb() btw?). And what does it pair with?
> >
> > Because it needs to order subsequent writes as well as reads.
> >
> > It is ordering whatever the RCU user wishes to put after the call to
> > poll_state_synchronize_rcu() with whatever the RCU user put before
> > whatever started the grace period that just now completed. Please
> > see the synchronize_rcu() comment header for the statement of the
> > guarantee. Or that of call_rcu().
>
> I see. OTOH the update side's CPU had to report a quiescent state for the
> requested grace period to complete. As the quiescent state propagated along
> with full ordering up to the root rnp, everything that happened before
> rcu_seq_done() should appear before and everything that happened after
> rcu_seq_done() should appear after.
>
> Now in the case where the update side's CPU is not the last CPU that
> reported a quiescent state (and thus not the one that propagated every
> subsequent CPU's QS to the final "rcu_state.gp_seq"), the full barrier
> after rcu_seq_done() is necessary to order against all the CPUs that
> reported a QS after the update side's CPU.
>
> Is that right?

That is the way I see it. ;-)
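
[ To make the "subsequent writes" point concrete, here is the sort of
  pattern that full barrier is protecting.  reader(), updater(), global_fp,
  and do_stuff() are invented for illustration, and the cookie is assumed
  to have been taken with get_state_synchronize_rcu() only after global_fp
  was set to NULL. ]

struct foo *global_fp;	/* RCU-protected pointer; struct foo as in the earlier sketch. */

static void reader(void)
{
	struct foo *fp;

	rcu_read_lock();
	fp = rcu_dereference(global_fp);
	if (fp)
		do_stuff(fp->data);	/* Reads of *fp within the critical section. */
	rcu_read_unlock();
}

static void updater(struct foo *old_fp, unsigned long cookie)
{
	if (poll_state_synchronize_rcu(cookie)) {
		/*
		 * These are stores.  An smp_rmb() in
		 * poll_state_synchronize_rcu() would order only reads,
		 * so nothing would keep these stores ordered after the
		 * grace period's end; the full smp_mb() does, so they
		 * cannot race with a pre-existing reader's accesses to
		 * *old_fp.
		 */
		old_fp->data = -1;
		kfree(old_fp);
	}
}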
> > For more detail on how these guarantees are implemented, please see
> > Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst
> > and its many diagrams.
>
> Indeed, very useful documentation!

Glad you like it!
> > There are a lot of memory barriers that pair and form larger cycles to
> > implement this guarantee. Pretty much all of the calls to the infamous
> > smp_mb__after_unlock_lock() macro form cycles involving this barrier,
> > for example.
> >
> > Please do not hesitate to ask more questions. This underpins RCU.
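
[ For the curious: most of those smp_mb__after_unlock_lock() calls come
  from the rcu_node locking wrappers.  From memory they look roughly like
  the sketch below (the real things live in kernel/rcu/rcu.h); the macro
  upgrades the prior UNLOCK plus this LOCK into a full memory barrier. ]

/* Paraphrased sketch, not the actual kernel definition. */
#define example_spin_lock_rcu_node(rnp)					\
do {									\
	raw_spin_lock(&ACCESS_PRIVATE(rnp, lock));			\
	smp_mb__after_unlock_lock(); /* UNLOCK+LOCK -> full barrier. */	\
} while (0)

The effect is that the CPU that just released an rnp->lock and the CPU
that next acquires it are fully ordered against each other, so each
quiescent-state report is ordered after everything the previous reporter
did, all the way up to the rcu_state.gp_seq update that
poll_state_synchronize_rcu() samples.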
>
> Careful what you wish for! ;-)

;-) ;-) ;-)

Thanx, Paul