4.4.12-rt20-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior

Trace events like raw_syscalls always show a preempt count of one. The
reason is that on PREEMPT kernels rcu_read_lock_sched_notrace()
increases the preemption counter, and the function recording the counter
is called within the RCU section.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
[ Changed this to upstream version. See commit e947841c0dce ]
Signed-off-by: Steven Rostedt
---
 kernel/trace/trace_events.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 4a48f97a2256..5bd79b347398 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -246,6 +246,14 @@ void *trace_event_buffer_reserve(struct trace_event_buffer *fbuffer,
 
 	local_save_flags(fbuffer->flags);
 	fbuffer->pc = preempt_count();
+	/*
+	 * If CONFIG_PREEMPT is enabled, then the tracepoint itself disables
+	 * preemption (adding one to the preempt_count). Since we are
+	 * interested in the preempt_count at the time the tracepoint was
+	 * hit, we need to subtract one to offset the increment.
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT))
+		fbuffer->pc--;
 	fbuffer->trace_file = trace_file;

 	fbuffer->event =
-- 
2.8.1
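
[ Editor's note: for readers following the reasoning outside the kernel tree, below is a
  minimal user-space sketch of the off-by-one compensation the hunk above performs. It is
  not kernel code; the simulated counter and the helper names (preempt_count_sim,
  rcu_read_lock_sched_notrace_sim, record_preempt_count, tracepoint_hit) are illustrative
  stand-ins for the tracepoint machinery described in the changelog. ]

/*
 * Sketch: the tracepoint path bumps the preemption counter via
 * rcu_read_lock_sched_notrace() before the recording function runs,
 * so the recorder subtracts one to report the count seen at the
 * call site when the PREEMPT behaviour is in effect.
 */
#include <stdio.h>
#include <stdbool.h>

/* Simulated preempt counter, standing in for the kernel's count. */
static int preempt_count_sim;

/* Stand-in for CONFIG_PREEMPT=y. */
static const bool config_preempt = true;

/* Models rcu_read_lock_sched_notrace(): disables preemption. */
static void rcu_read_lock_sched_notrace_sim(void)
{
	preempt_count_sim++;
}

static void rcu_read_unlock_sched_notrace_sim(void)
{
	preempt_count_sim--;
}

/* Models the recording step: captures the preempt count for the event. */
static int record_preempt_count(void)
{
	int pc = preempt_count_sim;

	/*
	 * The tracepoint machinery itself added one to the counter, so
	 * subtract it to report the count seen where the event fired.
	 */
	if (config_preempt)
		pc--;
	return pc;
}

/* Models a tracepoint hit while preemption is otherwise enabled. */
static void tracepoint_hit(void)
{
	rcu_read_lock_sched_notrace_sim();
	printf("recorded preempt count: %d\n", record_preempt_count());
	rcu_read_unlock_sched_notrace_sim();
}

int main(void)
{
	tracepoint_hit();	/* prints 0 rather than the misleading 1 */
	return 0;
}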