Date:   Wed, 12 Jul 2023 16:31:15 +0200
From:   Sven Schnelle <svens@...ux.ibm.com>
To:     Steven Rostedt <rostedt@...dmis.org>
Cc:     linux-kernel@...r.kernel.org, keescook@...omium.org
Subject: Re: [PATCH] tracing: fix memcpy size when copying stack entries

Hi Steven,

Steven Rostedt <rostedt@...dmis.org> writes:

> On Wed, 12 Jul 2023 16:06:27 +0200
> Sven Schnelle <svens@...ux.ibm.com> wrote:
>
>> > No, still getting the same warning:
>> >
>> > [    2.302776] memcpy: detected field-spanning write (size 104) of single field "stack" at kernel/trace/trace.c:3178 (size 64)  
>> 
>> BTW, I'm seeing the same error on x86 with current master when
>> CONFIG_FORTIFY_SOURCE=y and CONFIG_SCHED_TRACER=y:
>
> As I don't know how the fortifier works, nor what exactly it is checking,
> do you have any idea on how to quiet it?
>
> This is a false positive, as I described before.

The "problem" is that struct stack_entry is

struct stack_entry {
       int size;
       unsigned long caller[8];
};

So, as you explained, the ringbuffer code allocates some space after the
struct for additional entries:

struct stack_entry 1;
<additional space for 1>
struct stack_entry 2;
<additional space for 2>
...

But the struct member that is passed to memcpy() still carries the type
information "caller is an array of 8 members, 8 bytes each", so the
fortified memcpy() complains. I'm not sure whether to blame the compiler
or the fortify code here.
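
To illustrate what the fortify check sees, here's a small userspace
sketch (not the kernel code - the over-sized slot and the 13-entry stack
are made up, and it needs an optimizing build like gcc -O2 for
__builtin_object_size() to resolve anything): the declared type caps the
destination at 64 bytes, no matter how much room actually sits behind
the struct:

#include <stdio.h>
#include <stddef.h>

struct stack_entry {
	int size;
	unsigned long caller[8];	/* 8 * 8 = 64 bytes as far as the type says */
};

/* one over-sized slot, standing in for a ring buffer event with extra room */
static union {
	struct stack_entry entry;
	unsigned char slot[256];
} event;

int main(void)
{
	struct stack_entry *entry = &event.entry;
	size_t nr_entries = 13;					/* deeper stack than 8 entries */
	size_t size = nr_entries * sizeof(unsigned long);	/* 104, as in the warning */

	/* mode 1 = closest enclosing subobject; this is the bound FORTIFY applies */
	printf("declared size of entry->caller: %zu\n",
	       __builtin_object_size(&entry->caller, 1));
	printf("bytes the stack trace needs:    %zu\n", size);

	/* memcpy(&entry->caller, calls, size) would therefore be reported as a
	 * field-spanning write, even though the slot is big enough */
	return 0;
}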

One (ugly and whitespace-damaged) workaround is:

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 35b11f5a9519..31acd8a6b97e 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3170,7 +3170,8 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
                goto out;
        entry = ring_buffer_event_data(event);
 
-       memcpy(&entry->caller, fstack->calls, size);
+       void *p = (void *)entry + offsetof(struct stack_entry, caller);
+       memcpy(p, fstack->calls, size);
        entry->size = nr_entries;
 
        if (!call_filter_check_discard(call, entry, buffer, event))


So with that offsetof() calculation the compiler doesn't know about the
"8 entries * 8 bytes" limitation anymore, and the warning goes away.
Adding Kees to the thread; maybe he knows a cleaner way.
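
For illustration only, the same kind of userspace sketch comparing the
two destination pointers (made-up names again, gcc -O2, and this is just
my understanding of how __builtin_object_size() resolves subobjects):
once the pointer is formed via offsetof() instead of &entry->caller, the
member type no longer bounds it, and with a dynamically sized allocation
the size is simply unknown, so there is nothing left for FORTIFY to
check:

#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

struct stack_entry {
	int size;
	unsigned long caller[8];
};

int main(void)
{
	/* runtime-sized allocation, like a ring buffer event: the compiler
	 * cannot bound the whole object */
	size_t nr_entries = 8 + rand() % 8;
	struct stack_entry *entry = malloc(offsetof(struct stack_entry, caller) +
					   nr_entries * sizeof(unsigned long));
	if (!entry)
		return 1;

	/* via the member: bounded by the declared caller[8], i.e. 64 bytes */
	printf("&entry->caller     -> %ld\n",
	       (long)__builtin_object_size(&entry->caller, 1));

	/* via offsetof(): no member type attached, the size is unknown (-1),
	 * so a fortified memcpy() through this pointer is not checked */
	void *p = (void *)entry + offsetof(struct stack_entry, caller);
	printf("entry + offsetof() -> %ld\n",
	       (long)__builtin_object_size(p, 1));

	free(entry);
	return 0;
}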
