Message-ID: <ZkYvemdrEOVFNtVu@lothringen>
Date: Thu, 16 May 2024 18:08:26 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: Valentin Schneider <vschneid@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
"Paul E . McKenney" <paulmck@...nel.org>,
Boqun Feng <boqun.feng@...il.com>,
Joel Fernandes <joel@...lfernandes.org>,
Neeraj Upadhyay <neeraj.upadhyay@....com>,
Uladzislau Rezki <urezki@...il.com>,
Zqiang <qiang.zhang1211@...il.com>, rcu <rcu@...r.kernel.org>
Subject: Re: [PATCH 2/6] rcu: Remove superfluous full memory barrier upon
first EQS snapshot
On Thu, May 16, 2024 at 05:31:40PM +0200, Valentin Schneider wrote:
> On 15/05/24 14:53, Frederic Weisbecker wrote:
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 58415cdc54f8..f5354de5644b 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -773,7 +773,12 @@ static void rcu_gpnum_ovf(struct rcu_node *rnp, struct rcu_data *rdp)
> > */
> > static int dyntick_save_progress_counter(struct rcu_data *rdp)
> > {
> > - rdp->dynticks_snap = rcu_dynticks_snap(rdp->cpu);
>
> So for PPC, which gets the smp_mb() at the lock acquisition, this is an
> "obviously" redundant smp_mb().
>
> For the other archs, per the definition of smp_mb__after_unlock_lock() it
> seems implied that UNLOCK+LOCK is a full memory barrier, but I wanted to
> see it explicitly stated somewhere. From a bit of spelunking below I still
> think it's the case, but is there a "better" source of truth?
>
> 01352fb81658 ("locking: Add an smp_mb__after_unlock_lock() for UNLOCK+BLOCK barrier")
> """
> The Linux kernel has traditionally required that an UNLOCK+LOCK pair act as a
> full memory barrier when either (1) that UNLOCK+LOCK pair was executed by the
> same CPU or task, or (2) the same lock variable was used for the UNLOCK and
> LOCK.
> """
>
> and
>
> https://lore.kernel.org/all/1436789704-10086-1-git-send-email-will.deacon@arm.com/
> """
> This ordering guarantee is already provided without the barrier on
> all architectures apart from PowerPC
> """
You seem to have found the accurate information! But I must admit it is
hard to find, and it would be welcome to document it properly, for example
in Documentation/memory-barriers.txt.

I think the reason is that smp_mb__after_unlock_lock() is not supposed to
be used outside RCU, perhaps because its semantics are too fragile for
general-purpose use? Even that could be stated in
Documentation/memory-barriers.txt.
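For reference, this is roughly how RCU does the chaining today: every rnp
lock acquisition goes through a wrapper that immediately follows the LOCK
with the barrier. A sketch mirroring the raw_spin_lock_rcu_node() wrapper
in kernel/rcu/rcu.h (written from memory, not a verbatim copy):

/*
 * Every acquisition of the rcu_node ->lock is immediately followed by
 * smp_mb__after_unlock_lock(), so that the previous UNLOCK of that lock
 * (possibly on another CPU) combines with this LOCK into a full memory
 * barrier.
 */
#define raw_spin_lock_rcu_node(p)					\
do {									\
	raw_spin_lock(&ACCESS_PRIVATE(p, lock));			\
	smp_mb__after_unlock_lock();					\
} while (0)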
Another thing is that its semantics are similar to those of
smp_mb__after_spinlock() (itself badly documented), although slightly
different, and I'm not even completely sure how. I assume that
smp_mb__after_spinlock() can be used just once to produce the required
ordering, and that subsequent locking of that spinlock doesn't need to
repeat the barrier to propagate the ordering against what came before the
smp_mb__after_spinlock(). However, IIUC, smp_mb__after_unlock_lock() has
to be chained/repeated on all subsequent lockings of the spinlock...
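To make that assumed difference concrete, a sketch with a hypothetical
lock l and variables x/y (this encodes my assumption above, not a
documented guarantee):

/*
 * smp_mb__after_spinlock(): one instance turns this single LOCK into a
 * full barrier, ordering the x store against everything after it; later
 * critical sections on l would need nothing extra.
 */
WRITE_ONCE(x, 1);
spin_lock(&l);
smp_mb__after_spinlock();
WRITE_ONCE(y, 1);
spin_unlock(&l);

/*
 * smp_mb__after_unlock_lock(): only pairs this LOCK with the previous
 * UNLOCK of l, so each later acquisition must repeat it for the ordering
 * to keep propagating along the chain of critical sections:
 */
spin_lock(&l);
smp_mb__after_unlock_lock();
r1 = READ_ONCE(y);
spin_unlock(&l);

spin_lock(&l);
smp_mb__after_unlock_lock();	/* chained again on the next acquisition */
r2 = READ_ONCE(x);
spin_unlock(&l);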
Thanks.
>
> > + /*
> > + * Full ordering against accesses prior current GP and also against
>                                        ^^^^^
>                                     prior to
>
> > + * current GP sequence number is enforced by current rnp locking
> > + * with chained smp_mb__after_unlock_lock().
> > + */
> > + rdp->dynticks_snap = ct_dynticks_cpu_acquire(rdp->cpu);
> > if (rcu_dynticks_in_eqs(rdp->dynticks_snap)) {
> > trace_rcu_fqs(rcu_state.name, rdp->gp_seq, rdp->cpu, TPS("dti"));
> > rcu_gpnum_ovf(rdp->mynode, rdp);
> > --
> > 2.44.0
>