Message-ID: <20251006221043.07cdb0fd@gandalf.local.home>
Date: Mon, 6 Oct 2025 22:10:43 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Runping Lai <runpinglai@...gle.com>
Cc: Masami Hiramatsu <mhiramat@...nel.org>, Mathieu Desnoyers
 <mathieu.desnoyers@...icios.com>, Wattson CI <wattson-external@...gle.com>,
 kernel-team@...roid.com, linux-kernel@...r.kernel.org,
 linux-trace-kernel@...r.kernel.org, Luo Gengkun
 <luogengkun@...weicloud.com>, Linus Torvalds
 <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v1] Revert "tracing: Fix tracing_marker may trigger page
 fault during preempt_disable"

On Tue,  7 Oct 2025 00:34:17 +0000
Runping Lai <runpinglai@...gle.com> wrote:

> This reverts commit 3d62ab32df065e4a7797204a918f6489ddb8a237.
> 
> It's observed on Pixel 6 that this commit causes a severe functional
> regression: all user-space writes to trace_marker now fail. The write
> does not go through at all. The error is observed in the shell as
> 'printf: write: Bad address'. This breaks a primary ftrace interface
> for user-space debugging and profiling. In the kernel trace file, it's
> logged as 'tracing_mark_write: <faulted>'. After reverting this commit,
> functionality is restored.

This is very interesting. The copy is being done in an atomic context. If
handling the fault requires anything more than updating a page table, the
fault handler will likely do nothing and the copy will return a fault.
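
Just to illustrate (simplified; the real copy_from_user_nofault() has a few
more checks around it), the copy boils down to:

	long ret;

	pagefault_disable();
	/* Fails rather than faulting the page in if it is not present */
	ret = __copy_from_user_inatomic(dst, ubuf, cnt);
	pagefault_enable();

	if (ret)
		return -EFAULT;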

What preemption model is Pixel 6 running in? CONFIG_PREEMPT_NONE?

The original code is buggy, but if this is causing a regression, then we
likely need to do something else, like copying into a pre-allocated buffer
first?
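
Something along these lines, perhaps (completely untested sketch; buffer
sizing and allocation are hand-waved):

	/* Copy from user space while we can still take faults */
	char tmp[128];		/* stand-in for a proper pre-allocated buffer */
	size_t size = min_t(size_t, cnt, sizeof(tmp));

	if (copy_from_user(tmp, ubuf, size))
		return -EFAULT;

	/* ... then disable preemption and reserve the ring buffer event ... */

	entry = ring_buffer_event_data(event);
	entry->ip = ip;
	memcpy(&entry->buf, tmp, size);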

-- Steve


> 
> Signed-off-by: Runping Lai <runpinglai@...gle.com>
> Reported-by: Wattson CI <wattson-external@...gle.com>
> ---
>  kernel/trace/trace.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 156e7e0bf559..bb9a6284a629 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -7213,7 +7213,7 @@ static ssize_t write_marker_to_buffer(struct trace_array *tr, const char __user
>  	entry = ring_buffer_event_data(event);
>  	entry->ip = ip;
>  
> -	len = copy_from_user_nofault(&entry->buf, ubuf, cnt);
> +	len = __copy_from_user_inatomic(&entry->buf, ubuf, cnt);
>  	if (len) {
>  		memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);
>  		cnt = FAULTED_SIZE;
> @@ -7310,7 +7310,7 @@ static ssize_t write_raw_marker_to_buffer(struct trace_array *tr,
>  
>  	entry = ring_buffer_event_data(event);
>  
> -	len = copy_from_user_nofault(&entry->id, ubuf, cnt);
> +	len = __copy_from_user_inatomic(&entry->id, ubuf, cnt);
>  	if (len) {
>  		entry->id = -1;
>  		memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);

