Message-Id: <1433339930-20880-2-git-send-email-dvlasenk@redhat.com>
Date: Wed, 3 Jun 2015 15:58:50 +0200
From: Denys Vlasenko <dvlasenk@...hat.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Denys Vlasenko <dvlasenk@...hat.com>,
Josh Triplett <josh@...htriplett.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Steven Rostedt <rostedt@...dmis.org>,
Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>,
Andy Lutomirski <luto@...capital.net>,
Oleg Nesterov <oleg@...hat.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Alexei Starovoitov <ast@...mgrid.com>,
Will Drewry <wad@...omium.org>,
Kees Cook <keescook@...omium.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH 2/2] x86/asm/entry/32: Remove unnecessary optimization in stub32_clone
Really swap arguments #4 and #5 in stub32_clone instead of "optimizing"
it into a move.
Yes, tls_val is currently unused. Yes, on some CPUs XCHG is a little bit
more expensive than MOV. But a cycle or two on an expensive syscall like
clone() is well below the noise floor, and this optimization is simply
not worth the obfuscation of the logic.
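For illustration only (not part of the fix): a minimal C sketch of the two
argument orders the stub has to reconcile. The names clone_compat and
clone_native are made up for this sketch and the types are simplified; the
real prototypes live in kernel/fork.c behind the CLONE_BACKWARDS* config
options.

	/* Argument order as seen from 32-bit (compat) userspace: */
	long clone_compat(unsigned long flags, unsigned long newsp,
			  int *parent_tidptr,
			  unsigned long tls_val,	/* arg #4 */
			  int *child_tidptr);		/* arg #5 */

	/* Argument order the native 64-bit sys_clone() expects: */
	long clone_native(unsigned long flags, unsigned long newsp,
			  int *parent_tidptr,
			  int *child_tidptr,		/* arg #4 */
			  unsigned long tls_val);	/* arg #5 */

	/*
	 * At the stub32_clone call site, arg #4 is in %rcx and arg #5 is
	 * in %r8, so exchanging those two registers produces the native
	 * order before jumping to ia32_ptregs_common.
	 */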
Signed-off-by: Denys Vlasenko <dvlasenk@...hat.com>
CC: Josh Triplett <josh@...htriplett.org>
CC: Linus Torvalds <torvalds@...ux-foundation.org>
CC: Steven Rostedt <rostedt@...dmis.org>
CC: Ingo Molnar <mingo@...nel.org>
CC: Borislav Petkov <bp@...en8.de>
CC: "H. Peter Anvin" <hpa@...or.com>
CC: Andy Lutomirski <luto@...capital.net>
CC: Oleg Nesterov <oleg@...hat.com>
CC: Frederic Weisbecker <fweisbec@...il.com>
CC: Alexei Starovoitov <ast@...mgrid.com>
CC: Will Drewry <wad@...omium.org>
CC: Kees Cook <keescook@...omium.org>
CC: x86@...nel.org
CC: linux-kernel@...r.kernel.org
---
This is a resend.
There was a patch by Josh Triplett,
"x86: Opt into HAVE_COPY_THREAD_TLS, for both 32-bit and 64-bit",
sent on May 11, which does the same thing as part of a bigger cleanup.
Judging by his comments, he was supportive of this patch; he will
simply have to drop one hunk from his patch.
arch/x86/ia32/ia32entry.S | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/x86/ia32/ia32entry.S b/arch/x86/ia32/ia32entry.S
index 8e72256..0c302d0 100644
--- a/arch/x86/ia32/ia32entry.S
+++ b/arch/x86/ia32/ia32entry.S
@@ -567,11 +567,9 @@ GLOBAL(stub32_clone)
* 32-bit clone API is clone(..., int tls_val, int *child_tidptr).
* 64-bit clone API is clone(..., int *child_tidptr, int tls_val).
* Native 64-bit kernel's sys_clone() implements the latter.
- * We need to swap args here. But since tls_val is in fact ignored
- * by sys_clone(), we can get away with an assignment
- * (arg4 = arg5) instead of a full swap:
+ * We need to swap args here:
*/
- mov %r8, %rcx
+ xchg %r8, %rcx
jmp ia32_ptregs_common
ALIGN
--
1.8.1.4