Message-ID: <Y9D5AfnOukWNOZ5q@hirez.programming.kicks-ass.net>
Date: Wed, 25 Jan 2023 10:40:17 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Mark Rutland <mark.rutland@....com>
Cc: mingo@...nel.org, will@...nel.org, boqun.feng@...il.com,
tglx@...utronix.de, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, seanjc@...gle.com,
pbonzini@...hat.com, jgross@...e.com, srivatsa@...il.mit.edu,
amakhalov@...are.com, pv-drivers@...are.com, rostedt@...dmis.org,
mhiramat@...nel.org, wanpengli@...cent.com, vkuznets@...hat.com,
boris.ostrovsky@...cle.com, rafael@...nel.org,
daniel.lezcano@...aro.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
bsegall@...gle.com, mgorman@...e.de, bristot@...hat.com,
vschneid@...hat.com, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
linux-trace-kernel@...r.kernel.org, linux-pm@...r.kernel.org
Subject: Re: [PATCH 0/6] A few cpuidle vs rcu fixes
On Wed, Jan 25, 2023 at 10:35:16AM +0100, Peter Zijlstra wrote:
> tip/sched/core contains the following patch addressing this:
>
> ---
> commit 9aedeaed6fc6fe8452b9b8225e95cc2b8631ff91
> Author: Peter Zijlstra <peterz@...radead.org>
> Date: Thu Jan 12 20:43:49 2023 +0100
>
> tracing, hardirq: No moar _rcuidle() tracing
>
> Robot reported that trace_hardirqs_{on,off}() tickle the forbidden
> _rcuidle() tracepoint through local_irq_{en,dis}able().
>
> For 'sane' configs, these calls will only happen with RCU enabled and
> as such can use the regular tracepoint. This also means it's possible
> to trace them from NMI context again.
>
> Reported-by: kernel test robot <lkp@...el.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Signed-off-by: Ingo Molnar <mingo@...nel.org>
> Link: https://lore.kernel.org/r/20230112195541.477416709@infradead.org
>
> diff --git a/kernel/trace/trace_preemptirq.c b/kernel/trace/trace_preemptirq.c
> index 629f2854e12b..f992444a0b1f 100644
> --- a/kernel/trace/trace_preemptirq.c
> +++ b/kernel/trace/trace_preemptirq.c
> @@ -19,6 +19,20 @@
> /* Per-cpu variable to prevent redundant calls when IRQs already off */
> static DEFINE_PER_CPU(int, tracing_irq_cpu);
>
> +/*
> + * Use regular trace points on architectures that implement noinstr
> + * tooling: these calls will only happen with RCU enabled, which can
> + * use a regular tracepoint.
> + *
> + * On older architectures, use the rcuidle tracing methods (which
> + * aren't NMI-safe - so exclude NMI contexts):
> + */
> +#ifdef CONFIG_ARCH_WANTS_NO_INSTR
> +#define trace(point) trace_##point
> +#else
> +#define trace(point) if (!in_nmi()) trace_##point##_rcuidle
> +#endif
> +
> /*
> * Like trace_hardirqs_on() but without the lockdep invocation. This is
> * used in the low level entry code where the ordering vs. RCU is important
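
A sketch of how a call site ends up using the macro (assuming the trimmed
hunks convert trace_hardirqs_on() the same way the preempt functions are
converted below; the caller shown here is illustrative, not quoted from the
patch). The token pasting keeps a single call site for both tracepoint
flavours:

	/*
	 * Hypothetical caller, for illustration only: with
	 * CONFIG_ARCH_WANTS_NO_INSTR this expands to
	 *	trace_irq_enable(CALLER_ADDR0, CALLER_ADDR1);
	 * and on older architectures to
	 *	if (!in_nmi())
	 *		trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
	 */
	trace(irq_enable)(CALLER_ADDR0, CALLER_ADDR1);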
For some reason I missed the trace_preempt_{on,off} things, so those then
get the below on top.
diff --git a/kernel/trace/trace_preemptirq.c b/kernel/trace/trace_preemptirq.c
index f992444a0b1f..ea96b41c8838 100644
--- a/kernel/trace/trace_preemptirq.c
+++ b/kernel/trace/trace_preemptirq.c
@@ -100,15 +100,13 @@ NOKPROBE_SYMBOL(trace_hardirqs_off);
 
 void trace_preempt_on(unsigned long a0, unsigned long a1)
 {
-	if (!in_nmi())
-		trace_preempt_enable_rcuidle(a0, a1);
+	trace(preempt_enable)(a0, a1);
 	tracer_preempt_on(a0, a1);
 }
 
 void trace_preempt_off(unsigned long a0, unsigned long a1)
 {
-	if (!in_nmi())
-		trace_preempt_disable_rcuidle(a0, a1);
+	trace(preempt_disable)(a0, a1);
 	tracer_preempt_off(a0, a1);
 }
 #endif
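
For reference, with that hunk applied trace_preempt_on() reads
(reconstructed from the diff above):

	void trace_preempt_on(unsigned long a0, unsigned long a1)
	{
		trace(preempt_enable)(a0, a1);
		tracer_preempt_on(a0, a1);
	}

so the !in_nmi() check only remains in the non-noinstr expansion of trace().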