Message-ID: <20190828211904.GX26530@linux.ibm.com>
Date: Wed, 28 Aug 2019 14:19:04 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Joel Fernandes <joel@...lfernandes.org>
Cc: linux-kernel@...r.kernel.org,
Frederic Weisbecker <fweisbec@...il.com>,
Jonathan Corbet <corbet@....net>,
Josh Triplett <josh@...htriplett.org>, kernel-team@...roid.com,
Lai Jiangshan <jiangshanlai@...il.com>,
linux-doc@...r.kernel.org,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Mauro Carvalho Chehab <mchehab+samsung@...nel.org>,
rcu@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [RFC v1 2/2] rcu/tree: Remove dynticks_nmi_nesting counter

On Wed, Aug 28, 2019 at 05:05:25PM -0400, Joel Fernandes wrote:
> On Wed, Aug 28, 2019 at 01:23:30PM -0700, Paul E. McKenney wrote:
> > On Mon, Aug 26, 2019 at 09:33:54PM -0400, Joel Fernandes (Google) wrote:
> > > The dynticks_nmi_nesting counter serves three purposes:
> > >
> > > (a) rcu_is_cpu_rrupt_from_idle() needs to be able to detect the first
> > > interrupt nesting level.
> > >
> > > (b) We need to detect half-interrupts until we are sure they are not an
> > > issue. However, the comparison against DYNTICK_IRQ_NONIDLE is changed to
> > > a comparison against 0.
> > >
> > > (c) When a quiescent state report is needed from a nohz_full CPU, the
> > > nesting counter detects that we are in a first-level interrupt.
> > >
> > > For (a), we can just use dynticks_nesting == 1 to determine this. Only the
> > > outermost interrupt that interrupted an RCU-idle state can set it to 1.
> > >
> > > For (b), this warning condition has not occurred for several kernel
> > > releases, but we still keep the warning and change it to use
> > > in_interrupt() instead of the nesting counter. In a later year, we can
> > > remove the warning.
> > >
> > > For (c), the nesting check is not really necessary since forced_tick would
> > > have been set to true in the outermost interrupt, so nested interrupts and
> > > NMIs will check forced_tick anyway, and bail.
> >
> > Skipping the commit log and documentation for this pass.
> [snip]
> > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > index 255cd6835526..1465a3e406f8 100644
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
> > > @@ -81,7 +81,6 @@
> > >
> > > static DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, rcu_data) = {
> > > .dynticks_nesting = 1,
> > > - .dynticks_nmi_nesting = 0,
> >
> > This should be in the previous patch, give or take naming.
>
> Done.
>
> > > .dynticks = ATOMIC_INIT(RCU_DYNTICK_CTRL_CTR),
> > > };
> > > struct rcu_state rcu_state = {
> > > @@ -392,15 +391,9 @@ static int rcu_is_cpu_rrupt_from_idle(void)
> > > /* Check for counter underflows */
> > > RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nesting) < 0,
> > > "RCU dynticks_nesting counter underflow!");
> > > - RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nmi_nesting) <= 0,
> > > - "RCU dynticks_nmi_nesting counter underflow/zero!");
> > >
> > > - /* Are we at first interrupt nesting level? */
> > > - if (__this_cpu_read(rcu_data.dynticks_nmi_nesting) != 1)
> > > - return false;
> > > -
> > > - /* Does CPU appear to be idle from an RCU standpoint? */
> > > - return __this_cpu_read(rcu_data.dynticks_nesting) == 0;
> > > + /* Are we the outermost interrupt that arrived when RCU was idle? */
> > > + return __this_cpu_read(rcu_data.dynticks_nesting) == 1;
> > > }
> > >
> > > #define DEFAULT_RCU_BLIMIT 10 /* Maximum callbacks per rcu_do_batch ... */
> > > @@ -564,11 +557,10 @@ static void rcu_eqs_enter(bool user)
> > > struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
> > >
> > > /* Entering usermode/idle from interrupt is not handled. These would
> > > - * mean usermode upcalls or idle entry happened from interrupts. But,
> > > - * reset the counter if we warn.
> > > + * mean usermode upcalls or idle exit happened from interrupts. Remove
> > > + * the warning by 2020.
> > > */
> > > - if (WARN_ON_ONCE(rdp->dynticks_nmi_nesting != 0))
> > > - WRITE_ONCE(rdp->dynticks_nmi_nesting, 0);
> > > + WARN_ON_ONCE(in_interrupt());
> >
> > And this is a red flag. Bad things happen should some common code
> > that disables BH be invoked from the idle loop. This might not be
> > happening now, but we need to avoid this sort of constraint.
> > How about instead merging ->dynticks_nesting into the low-order bits
> > of ->dynticks_nmi_nesting?
> >
> > Yes, this assumes that we don't enter process level twice, but it should
> > be easy to add a WARN_ON() to test for that. Except that we don't have
> > to because there is already this near the end of rcu_eqs_exit():
> >
> > WARN_ON_ONCE(rdp->dynticks_nmi_nesting);
> >
> > So the low-order bit of the combined counter could indicate process-level
> > non-idle, the next three bits could be unused to make interpretation
> > of hex printouts easier, and then the rest of the bits could be used in
> > the same way as currently.
> >
> > This would allow a single read to see the full state, so that 0x1 means
> > at process level in the kernel, 0x11 is interrupt (or NMI) from process
> > level, 0x10 is interrupt/NMI from idle/user, and so on.
> >
> > What am I missing here? Why wouldn't this work, and without adding yet
> > another RCU-imposed constraint on some other subsystem?
>
> What about replacing the warning with WARN_ON_ONCE(in_irq())? Would that
> address your concern?
>
> Also, considering that this warning condition most likely never occurs as far
> as we know, and that we are considering deleting it soon enough, is it really
> worth reimplementing the whole mechanism with a complex bit-sharing scheme
> just because of the BH-disable condition you mentioned, which likely doesn't
> happen today? In my implementation, this is just a simple counter. I feel
> that combining bits in the same counter will just introduce more of the
> complexity that this patch tries to avoid.
>
> OTOH, I also don't mind just deleting the warning altogether if you are
> OK with that.

The big advantage of combining the counters is that all of the state is
explicit and visible in one place. Plus it can be accessed atomically.
And it avoids setting a time bomb for some poor guys just trying to get
their idle-loop jobs done some time in the dim distant future.

Besides, this pair of patches already makes a large change from a
conceptual viewpoint. If we are going to make a large change, let's
get our money's worth out of that change!
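
To make that concrete, here is a rough sketch of the combined-counter
encoding I have in mind. The names, helpers, and exact increment are
purely illustrative (this is not the current tree.c code), but they show
how a single value can carry both pieces of state:

/*
 * Illustrative sketch only, not the actual RCU implementation: one
 * combined per-CPU nesting counter.
 *
 *   bit 0          : 1 iff the CPU is non-idle at process level
 *   bits 1-3       : unused, to keep hex printouts easy to read
 *   remaining bits : IRQ/NMI nesting depth, in units of 0x10
 *
 * Example values: 0x0 idle/user, 0x1 process level in the kernel,
 * 0x10 IRQ/NMI from idle/user, 0x11 IRQ/NMI from process level,
 * 0x21 nested IRQ/NMI from process level.
 */
#define CTR_PROCESS_NONIDLE     0x1L
#define CTR_IRQ_INCREMENT       0x10L

/* Is this CPU idle from RCU's point of view? */
static inline bool ctr_is_idle(long ctr)
{
        return ctr == 0;
}

/* What rcu_is_cpu_rrupt_from_idle() would check: exactly one IRQ/NMI
 * level, taken while the CPU was idle at process level. */
static inline bool ctr_rrupt_from_idle(long ctr)
{
        return ctr == CTR_IRQ_INCREMENT;
}

Entering an IRQ or NMI adds CTR_IRQ_INCREMENT, exiting subtracts it, and
process-level entry/exit sets/clears CTR_PROCESS_NONIDLE, so a single
READ_ONCE() sees the full state.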

Thanx, Paul