Message-ID: <20250801160811.1e3c63e1@gandalf.local.home>
Date: Fri, 1 Aug 2025 16:08:11 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Steven Rostedt <rostedt@...nel.org>
Cc: linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
 Masami Hiramatsu <mhiramat@...nel.org>, Mark Rutland <mark.rutland@....com>,
 Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
 Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 3/5] tracing: Add guard() around locks and mutexes in trace.c

On Fri, 01 Aug 2025 10:25:09 -0400
Steven Rostedt <rostedt@...nel.org> wrote:
> @@ -2760,7 +2734,7 @@ trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
>
> if (!tr->no_filter_buffering_ref &&
> (trace_file->flags & (EVENT_FILE_FL_SOFT_DISABLED | EVENT_FILE_FL_FILTERED))) {
> - preempt_disable_notrace();
> + guard(preempt_notrace)();
> /*
> * Filtering is on, so try to use the per cpu buffer first.
> * This buffer will simulate a ring_buffer_event,
> @@ -2809,7 +2783,6 @@ trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
> this_cpu_dec(trace_buffered_event_cnt);
> }
> /* __trace_buffer_lock_reserve() disables preemption */
> - preempt_enable_notrace();
> }
>
> entry = __trace_buffer_lock_reserve(*current_rb, type, len,
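
For context: guard(preempt_notrace)() is the scope-based helper from
include/linux/cleanup.h (the preempt_notrace guard class is defined in
include/linux/preempt.h). It calls preempt_disable_notrace() at the point of
declaration and arranges for preempt_enable_notrace() to run automatically on
every path that leaves the enclosing scope, early returns included. A minimal
userspace sketch of the mechanism (stand-in names, not the kernel's actual
macro expansion):

#include <stdio.h>

static void preempt_disable_notrace(void) { puts("preempt: disable"); }
static void preempt_enable_notrace(void)  { puts("preempt: enable"); }

/* A scoped variable whose cleanup handler runs the "enable" side when
 * the variable goes out of scope, no matter how the scope is exited. */
static void guard_cleanup(int *p) { (void)p; preempt_enable_notrace(); }
#define guard_preempt_notrace() \
	int __guard __attribute__((cleanup(guard_cleanup), unused)) = \
		(preempt_disable_notrace(), 0)

static int f(int early)
{
	guard_preempt_notrace();
	if (early)
		return 1;	/* the cleanup (enable) still runs here */
	return 0;		/* ...and here */
}

int main(void)
{
	f(1);			/* prints: preempt: disable, preempt: enable */
	return 0;
}
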
Bah, this code is really this:
	preempt_disable_notrace();
	/*
	 * Filtering is on, so try to use the per cpu buffer first.
	 * This buffer will simulate a ring_buffer_event,
	 * where the type_len is zero and the array[0] will
	 * hold the full length.
	 * (see include/linux/ring_buffer.h for details on
	 *  how the ring_buffer_event is structured).
	 *
	 * Using a temp buffer during filtering and copying it
	 * on a matched filter is quicker than writing directly
	 * into the ring buffer and then discarding it when
	 * it doesn't match. That is because the discard
	 * requires several atomic operations to get right.
	 * Copying on match and doing nothing on a failed match
	 * is still quicker than no copy on match, but having
	 * to discard out of the ring buffer on a failed match.
	 */
	if ((entry = __this_cpu_read(trace_buffered_event))) {
		int max_len = PAGE_SIZE - struct_size(entry, array, 1);

		val = this_cpu_inc_return(trace_buffered_event_cnt);

		/*
		 * Preemption is disabled, but interrupts and NMIs
		 * can still come in now. If that happens after
		 * the above increment, then it will have to go
		 * back to the old method of allocating the event
		 * on the ring buffer, and if the filter fails, it
		 * will have to call ring_buffer_discard_commit()
		 * to remove it.
		 *
		 * Need to also check the unlikely case that the
		 * length is bigger than the temp buffer size.
		 * If that happens, then the reserve is pretty much
		 * guaranteed to fail, as the ring buffer currently
		 * only allows events less than a page. But that may
		 * change in the future, so let the ring buffer reserve
		 * handle the failure in that case.
		 */
		if (val == 1 && likely(len <= max_len)) {
			trace_event_setup(entry, type, trace_ctx);
			entry->array[0] = len;
			/* Return with preemption disabled */
			return entry;
			^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
		}
		this_cpu_dec(trace_buffered_event_cnt);
	}
	/* __trace_buffer_lock_reserve() disables preemption */
	preempt_enable_notrace();
So it must not be converted to a guard(). The contract here is to return
with preemption disabled on every path (either via this early return or
via __trace_buffer_lock_reserve() below), and a scope-based guard
re-enables preemption on every exit from the scope, early returns
included, which would break that contract.

Let me fix that.
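
To see the breakage concretely, here is the same sketch with a counter in
place of the real preempt count (again stand-in names, not kernel code):
the caller's later enable, which assumes the disable is still held,
underflows the count.

#include <stdio.h>

static int preempt_count;	/* stand-in for the real per-CPU count */

static void preempt_disable_notrace(void) { preempt_count++; }
static void preempt_enable_notrace(void)  { preempt_count--; }

static void guard_cleanup(int *p) { (void)p; preempt_enable_notrace(); }
#define guard_preempt_notrace() \
	int __guard __attribute__((cleanup(guard_cleanup), unused)) = \
		(preempt_disable_notrace(), 0)

/* The fast path as the guard conversion would leave it: it means to
 * return with preemption disabled, but the guard's cleanup fires on
 * the way out and enables preemption anyway. */
static void *reserve(void)
{
	guard_preempt_notrace();
	return (void *)1;	/* "return with preemption disabled"... not */
}

int main(void)
{
	void *entry = reserve();

	/* The caller balances the disable it believes is still held;
	 * in the kernel, the matching enable happens later, at commit
	 * time. */
	if (entry)
		preempt_enable_notrace();

	printf("preempt_count = %d\n", preempt_count);	/* -1: underflow */
	return 0;
}
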
-- Steve