Message-ID: <20241216233704.3208607-5-dwmw2@infradead.org>
Date: Mon, 16 Dec 2024 23:24:11 +0000
From: David Woodhouse <dwmw2@...radead.org>
To: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>,
Eric Biederman <ebiederm@...ssion.com>,
David Woodhouse <dwmw@...zon.co.uk>,
Sourabh Jain <sourabhjain@...ux.ibm.com>,
Hari Bathini <hbathini@...ux.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>,
Thomas Zimmermann <tzimmermann@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Baoquan He <bhe@...hat.com>,
Yuntao Wang <ytcoode@...il.com>,
David Kaplan <david.kaplan@....com>,
Tao Liu <ltao@...hat.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Kai Huang <kai.huang@...el.com>,
Ard Biesheuvel <ardb@...nel.org>,
Josh Poimboeuf <jpoimboe@...nel.org>,
Breno Leitao <leitao@...ian.org>,
Wei Yang <richard.weiyang@...il.com>,
Rong Xu <xur@...gle.com>,
Thomas Weißschuh <thomas.weissschuh@...utronix.de>,
linux-kernel@...r.kernel.org,
kexec@...ts.infradead.org,
Simon Horman <horms@...nel.org>,
Dave Young <dyoung@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
bsz@...zon.de,
nathan@...nel.org
Subject: [PATCH 4/9] x86/kexec: Fix stack and handling of re-entry point for ::preserve_context

From: David Woodhouse <dwmw@...zon.co.uk>

A ::preserve_context kimage can be invoked more than once, and the entry
point can be different every time. When the callee returns to the kernel,
it leaves the address of its entry point for next time on the stack.

That being the case, one might reasonably assume that the caller would
allocate space for it on the stack frame before actually performing the
'call' into the callee.

Apparently not, though. Ever since the kjump code was first added in
2009, it has set up a *new* stack at the top of the swap_page scratch
page, then just performed the 'call' without allocating any space for
the re-entry address to be returned. It then reads the re-entry point
for next time from 0(%rsp), which is actually the first qword of the page
*after* the swap page, which might not exist at all! And if the callee
has written to that, then it will have corrupted memory it doesn't own.

Correct this by pushing the entry point of the callee onto the stack
before calling it. The callee may then adjust it, or not, as it sees fit,
and subsequent invocations should work correctly either way.
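
To illustrate the resulting contract, here is a purely hypothetical
userspace C model (the type and function names and the addresses are
made up; the real interface is the assembly in the diff below): the
caller allocates a slot and seeds it with the current entry point, the
callee may overwrite that slot, and the caller reads it back as the
re-entry point for the next invocation.

  #include <stdio.h>
  #include <stdint.h>

  /* Hypothetical model of the ::preserve_context re-entry contract.
   * The "slot" stands in for the qword the caller now pushes onto
   * the callee's stack before the call. */
  typedef void (*kjump_callee_t)(uintptr_t *reentry_slot);

  static void callee_keeps_entry(uintptr_t *reentry_slot)
  {
          /* Callee leaves the slot alone: same entry point next time. */
          (void)reentry_slot;
  }

  static void callee_moves_entry(uintptr_t *reentry_slot)
  {
          /* Callee picks a different entry point for the next call. */
          *reentry_slot = 0x200000;       /* made-up address */
  }

  static uintptr_t invoke(kjump_callee_t callee, uintptr_t entry)
  {
          uintptr_t slot = entry;         /* caller allocates + seeds the slot */

          callee(&slot);                  /* callee may adjust it, or not */
          return slot;                    /* re-entry point for next time */
  }

  int main(void)
  {
          uintptr_t entry = 0x100000;     /* made-up initial entry point */

          entry = invoke(callee_keeps_entry, entry);
          entry = invoke(callee_moves_entry, entry);
          printf("next entry point: %#lx\n", (unsigned long)entry);
          return 0;
  }
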
Remove a stray push of zero to the *relocate_kernel* stack, which may
have been intended for this purpose, but which was actually just noise.

Also, loading the stack for the callee relied on the address of the swap
page being in %r10 without ever documenting that fact. Recent code
changes made that no longer true, so load it directly from the local
kexec_pa_swap_page variable instead.
Fixes: b3adabae8a96 ("x86/kexec: Drop page_list argument from relocate_kernel()")
Signed-off-by: David Woodhouse <dwmw@...zon.co.uk>
---
arch/x86/kernel/relocate_kernel_64.S | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index 0d6fce1e0a32..b680f24896b8 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -113,8 +113,6 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
* %r13 original CR4 when relocate_kernel() was invoked
*/
- /* set return address to 0 if not preserving context */
- pushq $0
/* store the start address on the stack */
pushq %rdx
@@ -208,12 +206,19 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
.Lrelocate:
popq %rdx
+
+ /* Use the swap page for the callee's stack */
+ movq kexec_pa_swap_page(%rip), %r10
leaq PAGE_SIZE(%r10), %rsp
+
+ /* push the existing entry point onto the callee's stack */
+ pushq %rdx
+
ANNOTATE_RETPOLINE_SAFE
call *%rdx
/* get the re-entry point of the peer system */
- movq 0(%rsp), %rbp
+ popq %rbp
leaq relocate_kernel(%rip), %r8
movq kexec_pa_swap_page(%rip), %r10
movq pa_backup_pages_map(%rip), %rdi
@@ -247,6 +252,7 @@ SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped)
lgdt saved_context_gdt_desc(%rax)
#endif
+ /* relocate_kernel() returns the re-entry point for next time */
movq %rbp, %rax
popf
--
2.47.0