Message-ID:  <173807862643.1525539.5494079998018402469.stgit@mhiramat.roam.corp.google.com>
Date: Wed, 29 Jan 2025 00:37:06 +0900
From: "Masami Hiramatsu (Google)" <mhiramat@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Masami Hiramatsu <mhiramat@...nel.org>,
	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
	Luis Chamberlain <mcgrof@...nel.org>,
	Petr Pavlu <petr.pavlu@...e.com>,
	Sami Tolvanen <samitolvanen@...gle.com>,
	Daniel Gomez <da.gomez@...sung.com>,
	linux-kernel@...r.kernel.org,
	linux-trace-kernel@...r.kernel.org,
	linux-modules@...r.kernel.org
Subject: [RFC PATCH 1/3] tracing: Record stacktrace as the offset from _stext

From: Masami Hiramatsu (Google) <mhiramat@...nel.org>

Record the kernel stacktrace entries as offsets from _stext so that
they are not affected by KASLR.

For the persistent ring buffer, decoding the stacktrace entries
requires the kallsyms of the previous boot, because KASLR gives the
kernel symbols a different random offset on each boot. That is
impractical, since it means the kallsyms must always be saved.
Instead, we can record the stacktrace entries as offsets from
_stext. The entries can then be decoded with System.map or nm on
the vmlinux.
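
As a concrete illustration of that decode step (a user-space sketch,
not part of this patch; the _stext and offset values below are
invented for the example), a recorded entry can be turned back into
a vmlinux address like this:

#include <stdio.h>

int main(void)
{
	/* _stext as reported by System.map or nm on the vmlinux;
	 * this value is invented for the example.
	 */
	unsigned long stext_vmlinux = 0xffffffff81000000UL;
	/* An offset read from a recorded stacktrace entry; also invented. */
	unsigned long recorded = 0x1234a0UL;

	/* The address to look up in System.map of that same vmlinux. */
	printf("0x%lx\n", stext_vmlinux + recorded);
	return 0;
}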

Signed-off-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>
---
 kernel/trace/trace.c        |    6 ++++++
 kernel/trace/trace_output.c |    2 +-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 1496a5ac33ae..8e86a43b368c 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2973,8 +2973,14 @@ static void __ftrace_trace_stack(struct trace_array *tr,
 		for (int i = 0; i < nr_entries; i++) {
 			if (calls[i] >= tramp_start && calls[i] < tramp_end)
 				calls[i] = FTRACE_TRAMPOLINE_MARKER;
+			else
+				calls[i] -= (unsigned long)_stext;
 		}
 	}
+#else
+	/* Adjust entries to be offsets from _stext, instead of raw addresses. */
+	for (int i = 0; i < nr_entries; i++)
+		fstack->calls[i] -= (unsigned long)_stext;
 #endif
 
 	event = __trace_buffer_lock_reserve(buffer, TRACE_STACK,
diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index 03d56f711ad1..497872df48f6 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -1248,7 +1248,7 @@ static enum print_line_t trace_stack_print(struct trace_iterator *iter,
 	struct trace_seq *s = &iter->seq;
 	unsigned long *p;
 	unsigned long *end;
-	long delta = iter->tr->text_delta;
+	long delta = (unsigned long)_stext + iter->tr->text_delta;
 
 	trace_assign_type(field, iter->ent);
 	end = (unsigned long *)((long)iter->ent + iter->ent_size);
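
To illustrate how the two hunks compose (an illustrative user-space
sketch, not kernel code; all values are invented): tr->text_delta
stays 0 unless the entries come from a previous boot's persistent
ring buffer, so in the common case the print side exactly undoes the
subtraction done at record time:

#include <stdio.h>

int main(void)
{
	unsigned long stext = 0xffffffff81000000UL;	/* this boot's _stext */
	unsigned long addr  = 0xffffffff811234a0UL;	/* one stack entry */
	long text_delta     = 0;	/* 0 for a non-persistent buffer */

	/* Record side, as in __ftrace_trace_stack() above: */
	unsigned long stored = addr - stext;

	/* Print side, as in trace_stack_print() above: */
	unsigned long printed = stored + stext + text_delta;

	/* printed == addr whenever text_delta == 0 */
	printf("stored=0x%lx printed=0x%lx\n", stored, printed);
	return 0;
}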

