Message-ID: <6dc21f67-52e1-4ed5-af7f-f047c3c22c11@efficios.com>
Date: Thu, 3 Oct 2024 20:26:29 -0400
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Masami Hiramatsu <mhiramat@...nel.org>, linux-kernel@...r.kernel.org,
 Peter Zijlstra <peterz@...radead.org>, Alexei Starovoitov <ast@...nel.org>,
 Yonghong Song <yhs@...com>, "Paul E . McKenney" <paulmck@...nel.org>,
 Ingo Molnar <mingo@...hat.com>, Arnaldo Carvalho de Melo <acme@...nel.org>,
 Mark Rutland <mark.rutland@....com>,
 Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
 Namhyung Kim <namhyung@...nel.org>,
 Andrii Nakryiko <andrii.nakryiko@...il.com>, bpf@...r.kernel.org,
 Joel Fernandes <joel@...lfernandes.org>, linux-trace-kernel@...r.kernel.org,
 Michael Jeanson <mjeanson@...icios.com>
Subject: Re: [PATCH v1 2/8] tracing/ftrace: guard syscall probe with
 preempt_notrace

On 2024-10-04 00:23, Steven Rostedt wrote:
> On Thu,  3 Oct 2024 11:16:32 -0400
> Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:
> 
>> In preparation for allowing system call enter/exit instrumentation to
>> handle page faults, make sure that ftrace can handle this change by
>> explicitly disabling preemption within the ftrace system call tracepoint
>> probes to respect the current expectations within ftrace ring buffer
>> code.
> 
> The ftrace ring buffer doesn't expect preemption to be disabled before use.
> It will explicitly disable preemption.
> 
> I don't think this patch is needed.

Steve,

Look here:

static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
{
         struct trace_array *tr = data;
         struct trace_event_file *trace_file;
         struct syscall_trace_enter *entry;
         struct syscall_metadata *sys_data;
         struct trace_event_buffer fbuffer;
         unsigned long args[6];
         int syscall_nr;
         int size;

         syscall_nr = trace_get_syscall_nr(current, regs);
         if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
                 return;

         /* Here we're inside tp handler's rcu_read_lock_sched (__DO_TRACE) */
         trace_file = rcu_dereference_sched(tr->enter_syscall_files[syscall_nr]);

^^^^ This explicitly relies on the tracepoint having disabled preemption:
rcu_dereference_sched() requires a sched-RCU read-side critical section
(see the sketch after this excerpt).

         if (!trace_file)
                 return;

         if (trace_trigger_soft_disabled(trace_file))
                 return;

         sys_data = syscall_nr_to_meta(syscall_nr);
         if (!sys_data)
                 return;

         size = sizeof(*entry) + sizeof(unsigned long) * sys_data->nb_args;

         entry = trace_event_buffer_reserve(&fbuffer, trace_file, size);
^^^^ It reserves space in the ring buffer without disabling preemption itself.
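
To spell out the rcu_dereference_sched() point: it is only legal inside
a sched-RCU read-side critical section, which is what the tracepoint's
preempt-disable provides. Under CONFIG_PROVE_RCU it performs, in
essence, the following check (paraphrased sketch, not the verbatim
definition from include/linux/rcupdate.h):

/* Paraphrased sketch of the lockdep check behind rcu_dereference_sched(). */
#define rcu_dereference_sched_sketch(p) \
        rcu_dereference_check((p), rcu_read_lock_sched_held())
/*
 * rcu_read_lock_sched_held() is satisfied when preemption is disabled,
 * so removing the tracepoint's preempt-disable would trip this check.
 */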

And also:

void *trace_event_buffer_reserve(struct trace_event_buffer *fbuffer,
                                  struct trace_event_file *trace_file,
                                  unsigned long len)
{
         struct trace_event_call *event_call = trace_file->event_call;

         if ((trace_file->flags & EVENT_FILE_FL_PID_FILTER) &&
             trace_event_ignore_this_pid(trace_file))
                 return NULL;

         /*
          * If CONFIG_PREEMPTION is enabled, then the tracepoint itself disables
          * preemption (adding one to the preempt_count). Since we are
          * interested in the preempt_count at the time the tracepoint was
          * hit, we need to subtract one to offset the increment.
          */
^^^ This function also explicitly expects preemption to be disabled.
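
That subtract-one only makes sense if the tracepoint really did add one
level to the preempt count. A simplified illustration of what the ring
buffer context helper does (exact helper name aside, not the verbatim
code):

        /*
         * Record the tracing context, dropping the one preempt-count
         * level the tracepoint is assumed to have added.
         */
        unsigned int trace_ctx = tracing_gen_ctx();
        if (IS_ENABLED(CONFIG_PREEMPTION))
                trace_ctx--;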

So I rest my case. The change I'm introducing for tracepoints doesn't
make any assumptions about whether or not each tracer requires preempt
off: it keeps the behavior the _same_ as it was before.

It is then up to each tracer's developers to change the behavior of
their own callbacks as they see fit. But I'm not introducing regressions
in tracers with the "big switch" change of making syscall tracepoints
faultable; that work belongs in changes specific to each tracer.
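
For reference, here is roughly what the guard in this patch amounts to
on the ftrace side, sketched from the subject line (the open-coded
equivalent would be preempt_disable_notrace()/preempt_enable_notrace()
around the probe body):

static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
{
        /*
         * Keep preemption disabled across the probe so the existing
         * ring buffer and rcu_dereference_sched() expectations still
         * hold once syscall tracepoints become faultable and stop
         * disabling preemption themselves.
         */
        guard(preempt_notrace)();

        /* ... existing probe body unchanged ... */
}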

Thanks,

Mathieu

> 
> -- Steve
> 
> 
>>
>> This change does not yet allow ftrace to take page faults per se within
>> its probe, but allows its existing probes to adapt to the upcoming
>> change.
>>
>> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
>> Acked-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>
>> Cc: Michael Jeanson <mjeanson@...icios.com>
>> Cc: Steven Rostedt <rostedt@...dmis.org>
>> Cc: Masami Hiramatsu <mhiramat@...nel.org>
>> Cc: Peter Zijlstra <peterz@...radead.org>
>> Cc: Alexei Starovoitov <ast@...nel.org>
>> Cc: Yonghong Song <yhs@...com>
>> Cc: Paul E. McKenney <paulmck@...nel.org>
>> Cc: Ingo Molnar <mingo@...hat.com>
>> Cc: Arnaldo Carvalho de Melo <acme@...nel.org>
>> Cc: Mark Rutland <mark.rutland@....com>
>> Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
>> Cc: Namhyung Kim <namhyung@...nel.org>
>> Cc: Andrii Nakryiko <andrii.nakryiko@...il.com>
>> Cc: bpf@...r.kernel.org
>> Cc: Joel Fernandes <joel@...lfernandes.org>

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com

