Message-Id: <20200416115118.867411350@infradead.org>
Date:   Thu, 16 Apr 2020 13:47:12 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     tglx@...utronix.de, jpoimboe@...hat.com
Cc:     linux-kernel@...r.kernel.org, x86@...nel.org, peterz@...radead.org,
        mhiramat@...nel.org, mbenes@...e.cz, jthierry@...hat.com,
        alexandre.chartre@...cle.com
Subject: [PATCH v5 06/17] x86,ftrace: Shrink ftrace_regs_caller() by one byte
'Optimize' ftrace_regs_caller. Instead of comparing against an
immediate, the more natural way to test a register for zero on x86 is
'test %r,%r'; since it takes no immediate operand, it also encodes one
byte shorter (je and jz assemble to the same opcode, so that rename is
purely cosmetic):

  48 83 f8 00             cmp    $0x0,%rax
  74 49                   je     226 <ftrace_regs_call+0xa3>

  48 85 c0                test   %rax,%rax
  74 49                   je     225 <ftrace_regs_call+0xa2>
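
For reference, the encodings can be reproduced outside the kernel with
a minimal standalone snippet (hypothetical zero-test.S, GNU as AT&T
syntax; not part of this patch):

  /* zero-test.S: compare encodings of the two zero-check idioms */
  .text
  .globl cmp_variant
  cmp_variant:
  	cmpq	$0, %rax		/* 48 83 f8 00 -- 4 bytes */
  	je	1f			/* 74 xx -- short jump */
  1:	ret

  .globl test_variant
  test_variant:
  	testq	%rax, %rax		/* 48 85 c0 -- 3 bytes */
  	jz	1f			/* same 74 xx opcode as je */
  1:	ret

Assembling with 'as zero-test.S -o zero-test.o' and disassembling with
'objdump -d zero-test.o' shows the one-byte difference per site.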
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 arch/x86/kernel/ftrace_64.S |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/kernel/ftrace_64.S
+++ b/arch/x86/kernel/ftrace_64.S
@@ -240,8 +240,8 @@ SYM_INNER_LABEL(ftrace_regs_call, SYM_L_
 	 * See arch_ftrace_set_direct_caller().
 	 */
 	movq ORIG_RAX(%rsp), %rax
-	cmpq	$0, %rax
-	je	1f
+	testq	%rax, %rax
+	jz	1f
 
 	/* Swap the flags with orig_rax */
 	movq MCOUNT_REG_SIZE(%rsp), %rdi