Message-ID: <202307121618.17C50DA9A@keescook>
Date: Wed, 12 Jul 2023 16:36:30 -0700
From: Kees Cook <keescook@...omium.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Linux Trace Kernel <linux-trace-kernel@...r.kernel.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Sven Schnelle <svens@...ux.ibm.com>,
linux-hardening@...r.kernel.org
Subject: Re: [PATCH] tracing: Stop FORTIFY_SOURCE complaining about stack
trace caller
On Wed, Jul 12, 2023 at 10:52:35AM -0400, Steven Rostedt wrote:
> From: "Steven Rostedt (Google)" <rostedt@...dmis.org>
>
> The stack_trace event is an event created by the tracing subsystem to
> store stack traces. It originally just contained a hard coded array of 8
> words to hold the stack, and a "size" to know how many entries are there.
> This is exported to user space as:
>
> name: kernel_stack
> ID: 4
> format:
> field:unsigned short common_type; offset:0; size:2; signed:0;
> field:unsigned char common_flags; offset:2; size:1; signed:0;
> field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
> field:int common_pid; offset:4; size:4; signed:1;
>
> field:int size; offset:8; size:4; signed:1;
> field:unsigned long caller[8]; offset:16; size:64; signed:0;
>
> print fmt: "\t=> %ps\n\t=> %ps\n\t=> %ps\n" "\t=> %ps\n\t=> %ps\n\t=> %ps\n" "\t=> %ps\n\t=> %ps\n",
> (void *)REC->caller[0], (void *)REC->caller[1], (void *)REC->caller[2],
> (void *)REC->caller[3], (void *)REC->caller[4], (void *)REC->caller[5],
> (void *)REC->caller[6], (void *)REC->caller[7]
>
> From this, user space tracers could parse the stack. The library was
> updated for this specific event to look only at the size, and not the
> array. But some older users still read the array (note that the older
> code still checks that the array fits inside the event it read: if only
> 4 words were saved, the parser would not read a fifth word, because it
> would see that it lies outside the event size).
>
> A while ago this event was changed to be more dynamic, so that it saves
> the full stack even when it is larger than 8 words. It does this by
> simply allocating more ring buffer space to hold the extra words, then
> copies the stack in via:
>
> memcpy(&entry->caller, fstack->calls, size);
>
> As the entry is a struct stack_entry, which is created by a macro that
> both defines the structure and exports it to user space, the caller
> field of the entry is still declared as: unsigned long caller[8].
>
> When the stack is greater than 8 words, the FORTIFY_SOURCE code notices
> that the amount being copied is greater than the destination array and
> complains about it. It has no idea that the destination points into the
> ring buffer, where the required space has been allocated.
>
> To hide this from the FORTIFY_SOURCE logic, pointer arithmetic is used:
>
> ptr = ring_buffer_event_data(event);
> entry = ptr;
> ptr += offsetof(typeof(*entry), caller);
> memcpy(ptr, fstack->calls, size);
But... Why are you lying to the compiler? Why not just make this
dynamically sized for real? It's not a "struct stack_entry" if it might
be bigger.
Just create a new struct that isn't lying? (Dealing with the "minimum
size" issue for a dynamic array is usually done with unions, but
ftrace's structs are ... different. As such, I just added a one-off
union.) Here, and you can be the first user of __counted_by too:
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 4529e264cb86..40935578c365 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3108,6 +3108,14 @@ struct ftrace_stacks {
static DEFINE_PER_CPU(struct ftrace_stacks, ftrace_stacks);
static DEFINE_PER_CPU(int, ftrace_stack_reserve);
+union stack_entry_dynamic {
+ struct stack_entry entry;
+ struct {
+ int size;
+ unsigned long caller[] __counted_by(size);
+ };
+};
+
static void __ftrace_trace_stack(struct trace_buffer *buffer,
unsigned int trace_ctx,
int skip, struct pt_regs *regs)
@@ -3116,7 +3124,7 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
struct ring_buffer_event *event;
unsigned int size, nr_entries;
struct ftrace_stack *fstack;
- struct stack_entry *entry;
+ union stack_entry_dynamic *entry;
int stackidx;
/*
@@ -3155,16 +3163,15 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
nr_entries = stack_trace_save(fstack->calls, size, skip);
}
- size = nr_entries * sizeof(unsigned long);
event = __trace_buffer_lock_reserve(buffer, TRACE_STACK,
- (sizeof(*entry) - sizeof(entry->caller)) + size,
+ struct_size(entry, caller, nr_entries),
trace_ctx);
if (!event)
goto out;
entry = ring_buffer_event_data(event);
- memcpy(&entry->caller, fstack->calls, size);
entry->size = nr_entries;
+ memcpy(entry->caller, fstack->calls, flex_array_size(entry, caller, nr_entries));
if (!call_filter_check_discard(call, entry, buffer, event))
__buffer_unlock_commit(buffer, event);
--
Kees Cook