Message-Id: <1241735222-6640-12-git-send-email-hpa@linux.intel.com>
Date:	Thu,  7 May 2009 15:26:59 -0700
From:	"H. Peter Anvin" <hpa@...ux.intel.com>
To:	linux-kernel@...r.kernel.org
Cc:	vgoyal@...hat.com, hbabu@...ibm.com, kexec@...ts.infradead.org,
	ying.huang@...el.com, mingo@...e.hu, tglx@...utronix.de,
	ebiederm@...ssion.com, sam@...nborg.org,
	"H. Peter Anvin" <hpa@...or.com>
Subject: [PATCH 11/14] x86, boot: use rep movsq to move kernel on 64 bits

From: H. Peter Anvin <hpa@...or.com>

rep movsq is the architecturally preferred way to move a block of
data.  It isn't the fastest way on all existing CPUs, but it is likely
to be in the future, and, perhaps more importantly, we should encourage
doing the architecturally right thing.
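
For reference, a forward block copy with rep movsq looks roughly like the
sketch below (a minimal userspace illustration, not the boot code itself;
the copy_fwd name and the System V calling convention are assumptions):

	.text
	.globl	copy_fwd
	.type	copy_fwd, @function
/*
 * copy_fwd(dst = %rdi, src = %rsi, qwords = %rdx)
 * Copies %rdx eight-byte words from %rsi to %rdi, ascending.
 */
copy_fwd:
	movq	%rdx, %rcx	/* rep takes its count from %rcx */
	cld			/* DF = 0: ascending copy */
	rep	movsq		/* copy %rcx qwords */
	ret
	.size	copy_fwd, .-copy_fwd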

This means saving and restoring %rsi around the copy code, which is
easily done by setting up the stack early.  However, we should not
copy .bss (which we are about to zero anyway); we should only copy up
to the *beginning* of .bss (just as on 32 bits.)
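
Because the kernel is moved up to a higher, potentially overlapping address
(so that decompression in place becomes safe), the copy in the hunk below
runs descending: %rsi/%rdi start at the last qword of each region, the
direction flag is set with std, and cleared again with cld afterwards.  A
standalone sketch of that pattern (the copy_bwd name and calling convention
are assumptions, not kernel symbols):

	.text
	.globl	copy_bwd
	.type	copy_bwd, @function
/*
 * copy_bwd(dst = %rdi, src = %rsi, qwords = %rdx)
 * Copies %rdx eight-byte words from %rsi to %rdi, descending,
 * which is safe when dst overlaps the top of src.
 */
copy_bwd:
	movq	%rdx, %rcx		/* rep takes its count from %rcx */
	leaq	-8(%rsi,%rdx,8), %rsi	/* last qword of the source */
	leaq	-8(%rdi,%rdx,8), %rdi	/* last qword of the destination */
	std				/* DF = 1: descending copy */
	rep	movsq
	cld				/* restore DF = 0 for callers */
	ret
	.size	copy_bwd, .-copy_bwd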

This also makes the code quite a bit more similar between 32 and 64 bits.

[ Impact: trivial optimization ]

Signed-off-by: H. Peter Anvin <hpa@...or.com>
---
 arch/x86/boot/compressed/head_64.S |   41 ++++++++++++++++++++---------------
 1 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 2678fdf..8bc8ed8 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -220,18 +220,30 @@ ENTRY(startup_64)
 #endif
 	leaq	z_extract_offset(%rbp), %rbx
 
-/* Copy the compressed kernel to the end of our buffer
+/*
+ * Set up the stack
+ */
+	leaq boot_stack_end(%rbx), %rsp
+
+/*
+ * Zero EFLAGS after setting rsp
+ */
+	pushq	$0
+	popfq
+
+/*
+ * Copy the compressed kernel to the end of our buffer
  * where decompression in place becomes safe.
  */
-	leaq	_end_before_pgt(%rip), %r8
-	leaq	_end_before_pgt(%rbx), %r9
-	movq	$_end_before_pgt /* - $startup_32 */, %rcx
-1:	subq	$8, %r8
-	subq	$8, %r9
-	movq	0(%r8), %rax
-	movq	%rax, 0(%r9)
-	subq	$8, %rcx
-	jnz	1b
+	pushq	%rsi		/* Kernel structure pointer */
+	leaq	(_bss-8)(%rip), %rsi
+	leaq	(_bss-8)(%rbx), %rdi
+	movq	$_bss /* - $startup_32 */, %rcx
+	shrq	$3, %rcx
+	std
+	rep	movsq
+	cld
+	popq	%rsi
 
 /*
  * Jump to the relocated address.
@@ -243,7 +255,7 @@ ENTRY(startup_64)
 relocated:
 
 /*
- * Clear BSS
+ * Clear BSS (stack is empty at this point)
  */
 	xorl	%eax, %eax
 	leaq    _edata(%rip), %rdi
@@ -253,13 +265,6 @@ relocated:
 	cld
 	rep	stosq
 
-	/* Setup the stack */
-	leaq	boot_stack_end(%rip), %rsp
-
-	/* zero EFLAGS after setting rsp */
-	pushq	$0
-	popfq
-
 /*
  * Do the decompression, and jump to the new kernel..
  */
-- 
1.6.0.6
