Message-ID: <20200610131238.GA26639@lenoir>
Date: Wed, 10 Jun 2020 15:12:39 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Joel Fernandes <joel@...lfernandes.org>,
LKML <linux-kernel@...r.kernel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
Josh Triplett <josh@...htriplett.org>
Subject: Re: [PATCH 01/10] rcu: Directly lock rdp->nocb_lock on nocb code
entrypoints
On Tue, Jun 09, 2020 at 11:02:27AM -0700, Paul E. McKenney wrote:
> > > > And anyway we still want to unconditionally lock in many places,
> > > > regardless of the offloaded state. I don't know how we could have
> > > > a magic helper do the unconditional lock in some places and the
> > > > conditional one in others.
> > >
> > > I was assuming (perhaps incorrectly) that an intermediate phase between
> > > not-offloaded and offloaded would take care of all of those cases.
> >
> > Perhaps partly but I fear that won't be enough.
>
> One approach is to rely on RCU read-side critical sections surrounding
> the lock acquisition and to stay in the intermediate phase until a grace
> period completes, preferably call_rcu() instead of synchronize_rcu().
>
> This of course means refusing to do a transition if the CPU is still
> in the intermediate state from a prior transition.
That sounds good, but using synchronize_rcu() would be far easier: we need
to keep hotplug and rcu_barrier locked across the whole transition anyway,
so blocking there is fine.
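Roughly like this (a minimal sketch; the nocb_state field and its values
are made up for illustration, while rdp->cblist and rdp->nocb_lock are the
in-tree names):

	/* Hypothetical per-rdp state for the sketch. */
	enum nocb_state {
		NOCB_DISABLED,		/* Callbacks processed locally. */
		NOCB_TRANSITIONING,	/* Intermediate: everyone locks. */
		NOCB_OFFLOADED,		/* Callbacks handled by nocb kthreads. */
	};

	/*
	 * Reader side: sample the state inside an RCU read-side critical
	 * section, so a concurrent (de-)offloading transition cannot
	 * complete its grace period, and thus finalize, under our feet.
	 * Once nocb_lock is held, it pins the state, since the transition
	 * side must also take it while in the intermediate phase.
	 */
	static void __rcu_nocb_lock_sketch(struct rcu_data *rdp)
	{
		rcu_read_lock();
		if (READ_ONCE(rdp->nocb_state) != NOCB_DISABLED)
			raw_spin_lock(&rdp->nocb_lock);
		rcu_read_unlock();
	}

	/*
	 * Transition side: publish the intermediate state, then wait a
	 * full grace period before declaring the CPU de-offloaded.
	 */
	static int rcu_nocb_cpu_deoffload_sketch(struct rcu_data *rdp)
	{
		/* Refuse a new transition while a prior one is settling. */
		if (READ_ONCE(rdp->nocb_state) == NOCB_TRANSITIONING)
			return -EBUSY;

		WRITE_ONCE(rdp->nocb_state, NOCB_TRANSITIONING);
		synchronize_rcu();
		WRITE_ONCE(rdp->nocb_state, NOCB_DISABLED);
		return 0;
	}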
> > Also I've been thinking that rcu_nocb_lock() should meet any of these
> > requirements:
> >
> > * hotplug is locked
> > * rcu barrier is locked
> > * rnp is locked
> >
> > Because checking the offloaded state of an rdp (before its nocb lock
> > is held) without any of the above locks held is racy. And that
> > condition should be easy to assert, which would prevent copy-pasta
> > accidents.
> >
> > What do you think?
>
> An RCU read-side critical section might be simpler.
Ok I think I can manage that.
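For the record, the assertion could look like this (a sketch only:
lockdep_is_cpus_held() is assumed to exist as the predicate flavor of
lockdep_assert_cpus_held(); rcu_state.barrier_mutex and the rcu_node
->lock are the in-tree names):

	/*
	 * Sampling rdp's offloaded state is only safe inside an RCU
	 * read-side critical section, or while holding one of the locks
	 * that serialize offloading transitions: the hotplug lock, the
	 * rcu_barrier mutex, or rdp's leaf rcu_node lock.
	 */
	static void rcu_nocb_lockdep_assert(struct rcu_data *rdp)
	{
		RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&
				 !lockdep_is_cpus_held() &&
				 !lockdep_is_held(&rcu_state.barrier_mutex) &&
				 !lockdep_is_held(&ACCESS_PRIVATE(rdp->mynode, lock)),
				 "Unsafe read of rdp offloaded state");
	}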
Thanks.