Message-Id: <20170929092335.2744-1-vbabka@suse.cz>
Date:   Fri, 29 Sep 2017 11:23:35 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H . Peter Anvin" <hpa@...or.com>
Cc:     x86@...nel.org, Josh Poimboeuf <jpoimboe@...hat.com>,
        Miroslav Benes <mbenes@...e.cz>, linux-kernel@...r.kernel.org,
        Vlastimil Babka <vbabka@...e.cz>
Subject: [PATCH v2] x86, stacktrace: avoid recording save_stack_trace wrappers

The save_stack_trace() and save_stack_trace_tsk() wrappers of
__save_stack_trace() add themselves to the call stack and thus appear in the
recorded stacktraces. This is redundant and wasteful when the space for
recording the useful part of the backtrace is limited, as it is with e.g. the
page_owner functionality.
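
For illustration only (not part of this patch), a typical caller records a
trace into a small fixed-size buffer roughly like this; every wrapper frame
that gets recorded burns one of the few entries slots:

	unsigned long entries[8];		/* small, fixed budget */
	struct stack_trace trace = {
		.max_entries	= ARRAY_SIZE(entries),
		.entries	= entries,
		/* .nr_entries and .skip start at 0 */
	};

	save_stack_trace(&trace);
	/*
	 * Without this patch, the first recorded entry is the
	 * save_stack_trace() wrapper itself, leaving only seven
	 * slots for the caller's actual backtrace.
	 */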

Fix this by making sure __save_stack_trace() is noinline (which matches the
current gcc decision, and keeps the skip count below accurate) and bumping the
skip in the wrappers; in save_stack_trace_tsk() only when called for the
current task, because a remote task is unwound from its saved context, where
the wrapper frame does not appear. This is similar to what was done for arm in
commit 3683f44c42e9 ("ARM: stacktrace: avoid listing stacktrace functions in
stacktrace") and is pending for arm64.

Also make sure that __save_stack_trace_reliable() doesn't get this problem in
the future, by marking it __always_inline (which matches gcc's current
decision), per Josh Poimboeuf.
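
For reference (paraphrased from the unpatched save_stack_address() in this
file, not part of the diff below), the skip counter is consumed one recorded
address at a time, so bumping it by one in a wrapper drops exactly the
wrapper's own frame:

	static int save_stack_address(struct stack_trace *trace,
				      unsigned long addr, bool nosched)
	{
		if (nosched && in_sched_functions(addr))
			return 0;

		if (trace->skip > 0) {
			trace->skip--;	/* swallow one frame per bump */
			return 0;
		}

		if (trace->nr_entries >= trace->max_entries)
			return -1;

		trace->entries[trace->nr_entries++] = addr;
		return 0;
	}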

Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
---
v2: save_stack_trace_tsk(): skip only when tsk == current; make
    __save_stack_trace_reliable() __always_inline (both suggested by Josh)
    
 arch/x86/kernel/stacktrace.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c
index 8dabd7bf1673..77835bc021c7 100644
--- a/arch/x86/kernel/stacktrace.c
+++ b/arch/x86/kernel/stacktrace.c
@@ -30,7 +30,7 @@ static int save_stack_address(struct stack_trace *trace, unsigned long addr,
 	return 0;
 }
 
-static void __save_stack_trace(struct stack_trace *trace,
+static void noinline __save_stack_trace(struct stack_trace *trace,
 			       struct task_struct *task, struct pt_regs *regs,
 			       bool nosched)
 {
@@ -56,6 +56,7 @@ static void __save_stack_trace(struct stack_trace *trace,
  */
 void save_stack_trace(struct stack_trace *trace)
 {
+	trace->skip++;
 	__save_stack_trace(trace, current, NULL, false);
 }
 EXPORT_SYMBOL_GPL(save_stack_trace);
@@ -70,6 +71,8 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
 	if (!try_get_task_stack(tsk))
 		return;
 
+	if (tsk == current)
+		trace->skip++;
 	__save_stack_trace(trace, tsk, NULL, true);
 
 	put_task_stack(tsk);
@@ -88,8 +91,9 @@ EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
 	}							\
 })
 
-static int __save_stack_trace_reliable(struct stack_trace *trace,
-				       struct task_struct *task)
+static int __always_inline
+__save_stack_trace_reliable(struct stack_trace *trace,
+			    struct task_struct *task)
 {
 	struct unwind_state state;
 	struct pt_regs *regs;
-- 
2.14.1
