Message-ID: <f3d100b4-d849-4fd5-a9ef-8fb0cc78a884@paulmck-laptop>
Date: Wed, 23 Jul 2025 13:54:50 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: rcu@...r.kernel.org, linux-kernel@...r.kernel.org, kernel-team@...a.com
Subject: Re: [PATCH 0/6] Switch __DECLARE_TRACE() to new notrace variant of
 SRCU-fast

On Wed, Jul 23, 2025 at 04:34:50PM -0400, Steven Rostedt wrote:
> On Wed, 23 Jul 2025 13:27:54 -0700
> "Paul E. McKenney" <paulmck@...nel.org> wrote:
> 
> > This continues to trigger a kernel test robot report of a
> > "using smp_processor_id() in preemptible" splat.  I looked for issues
> > with explicit preemption disabling, and, not finding any, will next turn
> > my attention to accesses to per-CPU variables.  Any and all insights
> > are welcome.
> 
> Currently perf and ftrace expect the tracepoints to be called with
> preemption disabled. You may need this:
> 
> diff --git a/include/trace/perf.h b/include/trace/perf.h
> index a1754b73a8f5..1b7925a85966 100644
> --- a/include/trace/perf.h
> +++ b/include/trace/perf.h
> @@ -71,7 +71,9 @@ perf_trace_##call(void *__data, proto)					\
>  	u64 __count __attribute__((unused));				\
>  	struct task_struct *__task __attribute__((unused));		\
>  									\
> +	preempt_disable_notrace();					\
>  	do_perf_trace_##call(__data, args);				\
> +	preempt_enable_notrace();					\
>  }
>  
>  #undef DECLARE_EVENT_SYSCALL_CLASS
> diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h
> index 4f22136fd465..0504a423ca25 100644
> --- a/include/trace/trace_events.h
> +++ b/include/trace/trace_events.h
> @@ -436,7 +436,9 @@ __DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), PARAMS(tstruct), \
>  static notrace void							\
>  trace_event_raw_event_##call(void *__data, proto)			\
>  {									\
> +	preempt_disable_notrace();					\
>  	do_trace_event_raw_event_##call(__data, args);			\
> +	preempt_enable_notrace();					\
>  }
>  
>  #undef DECLARE_EVENT_SYSCALL_CLASS
> 
> 
> But please add it with the change, as there's "preempt_count" accounting
> reported to the user that records that preemption was disabled when the
> tracepoint was called.
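
For the record, a minimal sketch (not part of the patch; example_handler()
and example_stub() are made-up names) of why the splat fires and why the
_notrace guards above silence it: with CONFIG_DEBUG_PREEMPT,
smp_processor_id() complains whenever the caller can still be preempted,
so the per-CPU accesses in the perf/ftrace handlers need the wrappers to
make them run non-preemptible.

	#include <linux/preempt.h>
	#include <linux/smp.h>

	static void example_handler(void)
	{
		/*
		 * Under CONFIG_DEBUG_PREEMPT, this emits the
		 * "using smp_processor_id() in preemptible" splat if
		 * the caller has not disabled preemption.
		 */
		int cpu = smp_processor_id();

		/* ... access per-CPU state for this cpu ... */
		(void)cpu;
	}

	static void example_stub(void)
	{
		preempt_disable_notrace();	/* same guard the diff adds */
		example_handler();		/* now runs non-preemptible */
		preempt_enable_notrace();
	}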

Thank you, Steve!  I suspect that it would have taken me one good long
time to find that one, like maybe forever.  ;-)

I am doing local testing, then will expose it to the kernel test robot,
and if all goes well, fold it in with attribution.

							Thanx, Paul
