Message-ID: <20220907151447.GA198228@lothringen>
Date: Wed, 7 Sep 2022 17:14:47 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...com, rostedt@...dmis.org,
Zqiang <qiang1.zhang@...el.com>
Subject: Re: [PATCH rcu 03/10] rcu: Add QS check in rcu_exp_handler() for
non-preemptible kernels
On Wed, Sep 07, 2022 at 07:57:59AM -0700, Paul E. McKenney wrote:
> On Wed, Sep 07, 2022 at 02:10:10PM +0200, Frederic Weisbecker wrote:
> > On Wed, Aug 31, 2022 at 11:07:58AM -0700, Paul E. McKenney wrote:
> > > From: Zqiang <qiang1.zhang@...el.com>
> > >
> > > Kernels built with CONFIG_PREEMPTION=n and CONFIG_PREEMPT_COUNT=y maintain
> > > preempt_count() state. Because such kernels map __rcu_read_lock()
> > > and __rcu_read_unlock() to preempt_disable() and preempt_enable(),
> > > respectively, this allows the expedited grace period's !CONFIG_PREEMPT_RCU
> > > version of the rcu_exp_handler() IPI handler function to use
> > > preempt_count() to detect quiescent states.
> > >
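[ For reference, the !CONFIG_PREEMPT_RCU read-side mapping looks
  roughly like this. This is a simplified sketch; the real definitions
  in include/linux/rcupdate.h carry a few extra details:

	static inline void __rcu_read_lock(void)
	{
		preempt_disable();	/* bumps preempt_count() */
	}

	static inline void __rcu_read_unlock(void)
	{
		preempt_enable();	/* count drops; reader is done */
	}

  So nonzero PREEMPT_MASK or SOFTIRQ_MASK bits in preempt_count()
  imply a possible reader, and their absence implies a quiescent
  state. ]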
> > > This preempt_count() usage might seem to risk failures due to
> > > use of implicit RCU readers in portions of the kernel under #ifndef
> > > CONFIG_PREEMPTION, except that rcu_core() already disallows such implicit
> > > RCU readers. The moral of this story is that you must use explicit
> > > read-side markings such as rcu_read_lock() or preempt_disable() even if
> > > the code knows that this kernel does not support preemption.
> > >
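[ A hypothetical illustration of that rule, with made-up names (gp,
  do_something): on a CONFIG_PREEMPTION=n kernel the first form may
  "work" by accident, but only the second is visible to the
  grace-period machinery:

	/* Implicit reader: unmarked, invisible to rcu_exp_handler(). */
	p = READ_ONCE(gp);
	do_something(p);

	/* Explicit reader: preempt_count() now reflects the critical section. */
	rcu_read_lock();
	p = rcu_dereference(gp);
	do_something(p);
	rcu_read_unlock();
]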
> > > This commit therefore adds a preempt_count()-based check for a quiescent
> > > state in the !CONFIG_PREEMPT_RCU version of the rcu_exp_handler()
> > > function for kernels built with CONFIG_PREEMPT_COUNT=y, reporting an
> > > immediate quiescent state when the interrupted code had both preemption
> > > and softirqs enabled.
> > >
> > > This change results in about a 2% reduction in expedited grace-period
> > > latency in kernels built with both CONFIG_PREEMPT_RCU=n and
> > > CONFIG_PREEMPT_COUNT=y.
> > >
> > > Signed-off-by: Zqiang <qiang1.zhang@...el.com>
> > > Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> > > Link: https://lore.kernel.org/all/20220622103549.2840087-1-qiang1.zhang@intel.com/
> > > ---
> > > kernel/rcu/tree_exp.h | 4 +++-
> > > 1 file changed, 3 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> > > index be667583a5547..b07998159d1fa 100644
> > > --- a/kernel/rcu/tree_exp.h
> > > +++ b/kernel/rcu/tree_exp.h
> > > @@ -828,11 +828,13 @@ static void rcu_exp_handler(void *unused)
> > >  {
> > >  	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
> > >  	struct rcu_node *rnp = rdp->mynode;
> > > +	bool preempt_bh_enabled = !(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK));
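[ The hunk above is trimmed to the new declaration.  Going by the
  commit message, the flag is presumably consumed along these lines;
  a sketch rather than the verbatim patch:

	/*
	 * Report an immediate quiescent state when the interrupted
	 * code had both preemption and softirqs (BH) enabled.
	 */
	if (rcu_is_cpu_rrupt_from_idle() ||
	    (IS_ENABLED(CONFIG_PREEMPT_COUNT) && preempt_bh_enabled)) {
		rcu_report_exp_rdp(this_cpu_ptr(&rcu_data));
		return;
	}
]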
> >
> > I don't know if nested hardirqs still exist. I only heard old rumours
> > about broken drivers. Should we take care of them?
>
> Last I checked, certain tracing scenarios from irq handlers looked
> to RCU like nested irq handlers. Given that, does your more robust
> approach below work correctly?
I haven't observed that, but in any case the check I propose
is stricter than the one in this patch. So in the worst case a QS
is simply not reported when a nested interrupt is detected.
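[ The stricter check referenced above is not quoted in this excerpt.
  A hypothetical sketch with that property, not Frederic's verbatim
  code, could require exactly one hardirq level and nothing else in
  preempt_count():

	/*
	 * The IPI handler itself contributes HARDIRQ_OFFSET, so any
	 * nested hardirq, pending softirq, or preempt-disabled section
	 * makes the comparison fail, and the QS is not reported here.
	 */
	bool qs_ok = (preempt_count() == HARDIRQ_OFFSET);
]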
Thanks.