Message-ID: <20241205153343.3275139-13-dwmw2@infradead.org>
Date: Thu, 5 Dec 2024 15:05:18 +0000
From: David Woodhouse <dwmw2@...radead.org>
To: kexec@...ts.infradead.org
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>,
David Woodhouse <dwmw@...zon.co.uk>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Kai Huang <kai.huang@...el.com>,
Nikolay Borisov <nik.borisov@...e.com>,
linux-kernel@...r.kernel.org,
Simon Horman <horms@...nel.org>,
Dave Young <dyoung@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
jpoimboe@...nel.org,
bsz@...zon.de
Subject: [PATCH v5 12/20] x86/kexec: Clean up register usage in relocate_kernel()
From: David Woodhouse <dwmw@...zon.co.uk>
The memory encryption flag is passed in %r8 because that's where the
calling convention puts it. Instead of moving it to %r12 and then
reusing %r8 for other purposes, just leave the flag in %r8 and use
different registers for those other values.
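For context, relocate_kernel() is entered via an ordinary C function
call (from machine_kexec()), so its arguments arrive in the System V
AMD64 argument registers. A rough sketch of the mapping follows; the
parameter names are illustrative, while the register assignments match
the comments in relocate_kernel_64.S below:

  /*
   * Illustrative prototype only -- argument-to-register mapping on entry:
   *   arg1  %rdi  indirection (page list) page
   *   arg2  %rsi  physical address of the control page
   *   arg3  %rdx  start address of the new kernel
   *   arg4  %rcx  preserve_context flag
   *   arg5  %r8   host_mem_enc_active flag
   */
  unsigned long relocate_kernel(unsigned long indirection_page,
                                unsigned long pa_control_page,
                                unsigned long start_address,
                                unsigned int preserve_context,
                                unsigned int host_mem_enc_active);

With the flag left in %r8 for the whole path, the control page address
can simply stay in %rsi, and the only value that still has to move is
preserve_context, which goes to %r11 because swap_pages() clobbers %rcx.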
Signed-off-by: David Woodhouse <dwmw@...zon.co.uk>
---
arch/x86/kernel/relocate_kernel_64.S | 17 ++++++-----------
1 file changed, 6 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index 288dfc08c63d..b24198eb1fe9 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -79,24 +79,18 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
movq %cr4, %r13
movq %r13, saved_cr4(%rip)
- /* Save SME active flag */
- movq %r8, %r12
-
/* save indirection list for jumping back */
movq %rdi, pa_backup_pages_map(%rip)
/* Save the preserve_context to %r11 as swap_pages clobbers %rcx. */
movq %rcx, %r11
- /* Physical address of control page */
- movq %rsi, %r8
-
/* setup a new stack at the end of the physical control page */
- lea PAGE_SIZE(%r8), %rsp
+ lea PAGE_SIZE(%rsi), %rsp
/* jump to identity mapped page */
- addq $(identity_mapped - relocate_kernel), %r8
- pushq %r8
+ addq $(identity_mapped - relocate_kernel), %rsi
+ pushq %rsi
ANNOTATE_UNRET_SAFE
ret
int3
@@ -107,8 +101,9 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
/*
* %rdi indirection page
* %rdx start address
+ * %r8 host_mem_enc_active
+ * %r9 page table page
* %r11 preserve_context
- * %r12 host_mem_enc_active
* %r13 original CR4 when relocate_kernel() was invoked
*/
@@ -161,7 +156,7 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
* entries that will conflict with the now unencrypted memory
* used by kexec. Flush the caches before copying the kernel.
*/
- testq %r12, %r12
+ testq %r8, %r8
jz .Lsme_off
wbinvd
.Lsme_off:
--
2.47.0