Message-ID: <cb594cb0-f2e7-887e-e454-03971cfcbc53@linux.microsoft.com>
Date:   Fri, 9 Apr 2021 12:16:26 -0500
From:   "Madhavan T. Venkataraman" <madvenka@...ux.microsoft.com>
To:     Mark Rutland <mark.rutland@....com>
Cc:     broonie@...nel.org, jpoimboe@...hat.com, jthierry@...hat.com,
        catalin.marinas@....com, will@...nel.org,
        linux-arm-kernel@...ts.infradead.org,
        live-patching@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 0/4] arm64: Implement stack trace reliability
 checks



On 4/9/21 7:09 AM, Mark Rutland wrote:
> Hi Madhavan,
> 
> I've noted some concerns below. At a high-level, I'm not keen on the
> blacklisting approach, and I think there's some other preparatory work
> that would be more valuable in the short term.
> 

Some kind of blacklisting has to be done whichever way you do it.

> On Mon, Apr 05, 2021 at 03:43:09PM -0500, madvenka@...ux.microsoft.com wrote:
>> From: "Madhavan T. Venkataraman" <madvenka@...ux.microsoft.com>
>>
>> There are a number of places in kernel code where the stack trace is not
>> reliable. Enhance the unwinder to check for those cases and mark the
>> stack trace as unreliable. Once all of the checks are in place, the unwinder
>> can provide a reliable stack trace. But before this can be used for livepatch,
>> some other entity needs to guarantee that the frame pointers are all set up
>> correctly in kernel functions. objtool is currently being worked on to
>> fill that gap.
>>
>> Except for the return address check, all the other checks involve matching
>> the return PC of every frame against certain kernel functions. To do this,
>> implement some infrastructure code:
>>
>> 	- Define a special_functions[] array and populate the array with
>> 	  the special functions
> 
> I'm not too keen on having to manually collate this within the unwinder,
> as it's very painful from a maintenance perspective. I'd much rather we
> could associate this information with the implementations of these
> functions, so that they're more likely to stay in sync.
> 
> Further, I believe all the special cases are assembly functions, and
> most of those are already in special sections to begin with. I reckon
> it'd be simpler and more robust to reject unwinding based on the
> section. If we need to unwind across specific functions in those
> sections, we could opt-in with some metadata. So e.g. we could reject
> all functions in ".entry.text", special casing the EL0 entry functions
> if necessary.
> 

Yes. I have already agreed that using sections is the way to go. I am working
on that now.

> As I mentioned before, I'm currently reworking the entry assembly to
> make this simpler to do. I'd prefer to not make invasive changes in that
> area until that's sorted.
> 

I don't plan to make any invasive changes. A couple of cosmetic changes may be
necessary, but I don't know yet. I will keep in mind that you don't want any
invasive changes there.

> I think there's a lot more code that we cannot unwind, e.g. KVM
> exception code, or almost anything marked with SYM_CODE_END().
> 

As Mark Brown suggested, I will take a look at all code that is marked as
SYM_CODE. His idea of placing all SYM_CODE in a separate section, blacklisting
that section to begin with, and refining things as we go along seems like a
reasonable approach to me.
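For illustration, a section-based check might look like the sketch below. The
section bounds (__sym_code_text_start/__sym_code_text_end) are hypothetical
linker symbols, not part of any posted patch:

    /*
     * Hypothetical bounds of a dedicated SYM_CODE section, emitted by
     * the linker script. These symbols are assumptions.
     */
    extern char __sym_code_text_start[], __sym_code_text_end[];

    /* Reject any return PC that falls inside the SYM_CODE section. */
    static bool pc_in_sym_code(unsigned long pc)
    {
            return pc >= (unsigned long)__sym_code_text_start &&
                   pc < (unsigned long)__sym_code_text_end;
    }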

>> 	- Using kallsyms_lookup(), lookup the symbol table entries for the
>> 	  functions and record their address ranges
>>
>> 	- Define an is_reliable_function(pc) to match a return PC against
>> 	  the special functions.
>>
>> The unwinder calls is_reliable_function(pc) for every return PC and marks
>> the stack trace as reliable or unreliable accordingly.
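For concreteness, here is a minimal sketch of that infrastructure. The
structure layout and initialization details are assumptions, not the actual
patch; only the kallsyms calls and the names from the cover letter are real:

    #include <linux/kallsyms.h>
    #include <linux/kernel.h>

    /* Names of the special functions (the full list is given below). */
    static const char *const special_names[] = {
            "el1_sync",
            "el1_irq",
            /* ... remaining special functions ... */
    };

    /* Resolved address ranges, one per name above. */
    static struct {
            unsigned long start;
            unsigned long end;
    } special_functions[ARRAY_SIZE(special_names)];

    /* Resolve each name to [addr, addr + size) via kallsyms. */
    static void init_special_functions(void)
    {
            unsigned long addr, size, offset;
            int i;

            for (i = 0; i < ARRAY_SIZE(special_names); i++) {
                    addr = kallsyms_lookup_name(special_names[i]);
                    if (addr &&
                        kallsyms_lookup_size_offset(addr, &size, &offset)) {
                            special_functions[i].start = addr;
                            special_functions[i].end = addr + size;
                    }
            }
    }

    /* Return false if the PC lies inside any special function. */
    static bool is_reliable_function(unsigned long pc)
    {
            int i;

            for (i = 0; i < ARRAY_SIZE(special_functions); i++) {
                    if (pc >= special_functions[i].start &&
                        pc < special_functions[i].end)
                            return false;
            }
            return true;
    }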
>>
>> Return address check
>> ====================
>>
>> Check the return PC of every stack frame to make sure that it is a valid
>> kernel text address (and not some generated code, for example).
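One way to implement this check, assuming the generic __kernel_text_address()
helper is an acceptable definition of "valid kernel text address"; the wrapper
function is hypothetical:

    #include <linux/kernel.h>

    /* Reject return PCs that do not point into known kernel text. */
    static bool pc_is_kernel_text(unsigned long pc)
    {
            return __kernel_text_address(pc);
    }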
>>
>> Detect EL1 exception frame
>> ==========================
>>
>> EL1 exceptions can happen on any instruction, including instructions in
>> the frame pointer prolog or epilog. Depending on where exactly they happen,
>> they could render the stack trace unreliable.
>>
>> Add all of the EL1 exception handlers to special_functions[].
>>
>> 	- el1_sync()
>> 	- el1_irq()
>> 	- el1_error()
>> 	- el1_sync_invalid()
>> 	- el1_irq_invalid()
>> 	- el1_fiq_invalid()
>> 	- el1_error_invalid()
>>
>> Detect ftrace frame
>> ===================
>>
>> When FTRACE executes at the beginning of a traced function, it creates two
>> frames and calls the tracer function:
>>
>> 	- One frame for the traced function
>>
>> 	- One frame for the caller of the traced function
>>
>> That gives a sensible stack trace while executing in the tracer function.
>> When FTRACE returns to the traced function, the frames are popped and
>> everything is back to normal.
>>
>> However, in cases like live patch, the tracer function redirects execution
>> to a different function. When FTRACE returns, control will go to that target
>> function. A stack trace taken in the tracer function will not show the target
>> function. The target function is the real function that we want to track.
>> So, the stack trace is unreliable.
> 
> This doesn't match my understanding of the reliable stacktrace
> requirements, but I might have misunderstood what you're saying here.
> 
> IIUC what you're describing here is:
> 
> 1) A calls B
> 2) B is traced
> 3) tracer replaces B with TARGET
> 4) tracer returns to TARGET
> 
> ... and if a stacktrace is taken at step 3 (before the return address is
> patched), the trace will show B rather than TARGET.
> 
> My understanding is that this is legitimate behaviour.
> 

My understanding is as follows (correct me if I am wrong):

- Before B is traced, the situation is "A calls B".

- A trace is placed on B to redirect execution to TARGET. Semantically, it
  becomes "A calls TARGET" beyond that point and B is irrelevant.

- But temporarily, the stack trace will show A -> B.

>> To detect stack traces from a tracer function, add the following to
>> special_functions[]:
>>
>> 	- ftrace_call + 4
>>
>> ftrace_call is the label at which the tracer function is patched in. So,
>> ftrace_call + 4 is its return address. This is what will show up in a
>> stack trace taken from the tracer function.
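For illustration, this entry could be computed as follows. The variable name
is made up, and the local declaration of ftrace_call exists only for the
sketch:

    /* ftrace_call is a label in the arm64 ftrace entry assembly. */
    extern void ftrace_call(void);

    /*
     * The BL patched in at ftrace_call returns to the following
     * instruction, i.e. ftrace_call + 4 on arm64.
     */
    unsigned long tracer_return_pc = (unsigned long)ftrace_call + 4;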
>>
>> When Function Graph Tracing is on, ftrace_graph_caller is patched in
>> at the label ftrace_graph_call. If a tracer function called before it has
>> redirected execution as described above, stack traces taken from within
>> ftrace_graph_caller will be unreliable for the same reason. So, add
>> ftrace_graph_caller to special_functions[] as well.
>>
>> Also, the Function Graph Tracer modifies the return address of a traced
>> function to a return trampoline (return_to_handler()) to gather tracing
>> data on function return. Stack traces taken from the traced function and
>> functions it calls will not show the original caller of the traced function.
>> The unwinder handles this case by getting the original caller from FTRACE.
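This is roughly how the arm64 unwinder recovers the original caller today,
condensed from arch/arm64/kernel/stacktrace.c; the wrapper function is mine,
and pointer-authentication stripping is omitted:

    #include <linux/ftrace.h>
    #include <linux/sched.h>
    #include <asm/stacktrace.h>

    static int unwind_graph_ret(struct task_struct *tsk,
                                struct stackframe *frame)
    {
    #ifdef CONFIG_FUNCTION_GRAPH_TRACER
            /*
             * If the saved PC is the graph-tracer trampoline, fetch the
             * original caller that return_to_handler will return to.
             */
            if (tsk->ret_stack &&
                frame->pc == (unsigned long)return_to_handler) {
                    struct ftrace_ret_stack *ret_stack;

                    ret_stack = ftrace_graph_get_ret_stack(tsk,
                                                           frame->graph++);
                    if (WARN_ON_ONCE(!ret_stack))
                            return -EINVAL;
                    frame->pc = ret_stack->ret;
            }
    #endif
            return 0;
    }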
>>
>> However, stack traces taken from the trampoline itself and functions it calls
>> are unreliable as the original return address may not be available in
>> that context. This is because the trampoline calls FTRACE to gather trace
>> data as well as to obtain the actual return address, and FTRACE discards
>> the record of the original return address along the way.
> 
> The reason we cannot unwind the trampolines in the usual way is because
> they are not AAPCS compliant functions. We don't discard the original
> return address, but it's not in the usual location.  With care, we could
> write a special case unwinder for them. Note that we also cannot unwind
> from any PLT on the way to the trampolines, so we'd also need to
> identify those.  Luckily we're in charge of creating those, and (for
> now) we only need to care about the module PLTs.
> 
> The bigger problem is return_to_handler, since there's a transient
> period when C code removes the return address from the graph return
> stack before passing this to assembly in a register, and so we can't
> reliably find the correct return address during this period. With care
> we could special case unwinding immediately before/after this.
> 

This is what I meant by "the original return address may not be available in
that context": the original address is popped off the return address stack by
the ftrace code called from the trampoline.

> If we could find a way to restructure return_to_handler such that we can
> reliably find the correct return address, that would be a useful
> improvement today, and would mean that we don't have to blacklist it for
> reliable stacktrace.
> 

Agreed. But until then, it needs to be blacklisted. Rather than wait for that
restructuring, we could blacklist return_to_handler initially and remove the
blacklist if and when the restructuring is done.

Thanks.

Madhavan
