Message-ID: <20221215172521.GA2325254@paulmck-ThinkPad-P17-Gen-1>
Date: Thu, 15 Dec 2022 09:25:21 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Masami Hiramatsu <mhiramat@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux Trace Kernel <linux-trace-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: Re: [PATCH] tracing: Do not synchronize freeing of trigger filter on boot up

On Thu, Dec 15, 2022 at 09:02:56AM -0800, Paul E. McKenney wrote:
> On Thu, Dec 15, 2022 at 10:02:41AM -0500, Steven Rostedt wrote:
> > On Wed, 14 Dec 2022 12:03:33 -0800
> > "Paul E. McKenney" <paulmck@...nel.org> wrote:
> >
> > > > > Avoid calling the synchronization function when system_state is
> > > > > SYSTEM_BOOTING.
> > > >
> > > > Shouldn't this be done inside tracepoint_synchronize_unregister()?
> > > > Then, it will prevent similar warnings if we expand boot time feature.
> > >
> > > How about the following wide-spectrum fix within RCU_LOCKDEP_WARN()?
> > > Just in case there are ever additional issues of this sort?
> >
> > Adding more tracing command line parameters is triggering this more. I now
> > hit:
>
> Fair point, and apologies for the hassle.
>
> Any chance of getting an official "it is now late enough in boot to
> safely invoke lockdep" API? ;-)
>
> In the meantime, does the (untested and quite crude) patch at the end
> of this message help?
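
For illustration, Masami's suggestion would amount to something like the
sketch below in kernel/tracepoint.c (untested, and the exact early-boot
test is an assumption for discussion rather than a settled choice):

void tracepoint_synchronize_unregister(void)
{
	/*
	 * Early in boot there is only one task and preemption is
	 * disabled, so nothing can be inside an RCU read-side
	 * critical section and the grace-period waits below would
	 * be no-ops anyway.  Skipping them also avoids tripping
	 * lockdep before it is fully initialized.
	 */
	if (system_state == SYSTEM_BOOTING)
		return;
	synchronize_srcu(&tracepoint_srcu);
	synchronize_rcu();
}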

OK, I was clearly not yet awake.  :-/

The more targeted (but still untested) patch below might be more
appropriate...

							Thanx, Paul
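
For reference, the gating in the patch keys off rcu_scheduler_active,
whose values are defined in kernel/rcu/rcu.h (comments paraphrased here):

#define RCU_SCHEDULER_INACTIVE	0	/* Early boot: one task, no preemption. */
#define RCU_SCHEDULER_INIT	1	/* Scheduler up, RCU still mid-init. */
#define RCU_SCHEDULER_RUNNING	2	/* RCU fully operational. */

While rcu_scheduler_active is still RCU_SCHEDULER_INACTIVE, early boot
legitimately runs with interrupts disabled, so asserting that interrupts
are enabled would be a false positive; the patch skips the lockdep
assertions for that window only.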
------------------------------------------------------------------------
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index ee8a6a711719a..f627888715dca 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1314,7 +1314,7 @@ static void rcu_poll_gp_seq_start(unsigned long *snap)
 {
 	struct rcu_node *rnp = rcu_get_root();
 
-	if (rcu_init_invoked())
+	if (rcu_scheduler_active != RCU_SCHEDULER_INACTIVE)
 		raw_lockdep_assert_held_rcu_node(rnp);
 
 	// If RCU was idle, note beginning of GP.
@@ -1330,7 +1330,7 @@ static void rcu_poll_gp_seq_end(unsigned long *snap)
 {
 	struct rcu_node *rnp = rcu_get_root();
 
-	if (rcu_init_invoked())
+	if (rcu_scheduler_active != RCU_SCHEDULER_INACTIVE)
 		raw_lockdep_assert_held_rcu_node(rnp);
 
 	// If the previously noted GP is still in effect, record the
@@ -1353,7 +1353,8 @@ static void rcu_poll_gp_seq_start_unlocked(unsigned long *snap)
 	struct rcu_node *rnp = rcu_get_root();
 
 	if (rcu_init_invoked()) {
-		lockdep_assert_irqs_enabled();
+		if (rcu_scheduler_active != RCU_SCHEDULER_INACTIVE)
+			lockdep_assert_irqs_enabled();
 		raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	}
 	rcu_poll_gp_seq_start(snap);
@@ -1369,7 +1370,8 @@ static void rcu_poll_gp_seq_end_unlocked(unsigned long *snap)
 	struct rcu_node *rnp = rcu_get_root();
 
 	if (rcu_init_invoked()) {
-		lockdep_assert_irqs_enabled();
+		if (rcu_scheduler_active != RCU_SCHEDULER_INACTIVE)
+			lockdep_assert_irqs_enabled();
 		raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	}
 	rcu_poll_gp_seq_end(snap);
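
(If the "late enough in boot" test above ever grows into a real API, it
might look something like this purely hypothetical wrapper -- not an
existing kernel interface:

static inline bool lockdep_boot_asserts_usable(void)
{
	/* Hypothetical: true once RCU has seen the scheduler start. */
	return rcu_scheduler_active != RCU_SCHEDULER_INACTIVE;
}

which would let callers such as rcu_poll_gp_seq_start_unlocked() read a
bit more declaratively.)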