Message-Id: <20190312170352.778671364@linuxfoundation.org>
Date: Tue, 12 Mar 2019 10:07:14 -0700
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Wei Huang <wei@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
bp@...en8.de, hpa@...or.com, Sasha Levin <sashal@...nel.org>
Subject: [PATCH 4.20 054/171] x86/boot/compressed/64: Set EFER.LME=1 in 32-bit trampoline before returning to long mode
4.20-stable review patch. If anyone has any objections, please let me know.
------------------
[ Upstream commit b677dfae5aa197afc5191755a76a8727ffca538a ]
In some old AMD KVM implementations, the guest's EFER.LME bit is cleared by
KVM when the hypervisor detects that the guest sets CR0.PG to 0. This causes
the guest OS to reboot when it tries to return from the 32-bit trampoline
code, because the CPU is in an incorrect state: CR4.PAE=1, CR0.PG=1, CS.L=1,
but EFER.LME=0. As a precaution, set EFER.LME=1 as part of the long mode
activation procedure. This extra step won't cause any harm when Linux is
booted on a bare-metal machine.
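
For reference, a rough C sketch of what the added trampoline instructions
do. This is illustration only (the actual fix is the 32-bit assembly in the
hunk below); MSR_EFER (0xC0000080) and the LME bit position (bit 8) are the
architectural values, and the helper names are made up for this sketch:

  #include <stdint.h>

  #define MSR_EFER  0xC0000080u
  #define EFER_LME  (1u << 8)          /* long mode enable */

  static inline uint64_t rdmsr_efer(void)
  {
          uint32_t lo, hi;

          /* rdmsr: MSR index in ECX, result in EDX:EAX */
          __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(MSR_EFER));
          return ((uint64_t)hi << 32) | lo;
  }

  static inline void wrmsr_efer(uint64_t val)
  {
          /* wrmsr: MSR index in ECX, value in EDX:EAX */
          __asm__ volatile("wrmsr" : : "c"(MSR_EFER),
                           "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
  }

  static void ensure_efer_lme(void)
  {
          uint64_t efer = rdmsr_efer();

          /* Re-set LME in case the hypervisor cleared it behind our back. */
          if (!(efer & EFER_LME))
                  wrmsr_efer(efer | EFER_LME);
  }
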
Signed-off-by: Wei Huang <wei@...hat.com>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Cc: bp@...en8.de
Cc: hpa@...or.com
Link: https://lkml.kernel.org/r/20190104054411.12489-1-wei@redhat.com
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
arch/x86/boot/compressed/head_64.S | 8 ++++++++
arch/x86/boot/compressed/pgtable.h | 2 +-
2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 64037895b085..f105ae8651c9 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -600,6 +600,14 @@ ENTRY(trampoline_32bit_src)
leal TRAMPOLINE_32BIT_PGTABLE_OFFSET(%ecx), %eax
movl %eax, %cr3
3:
+ /* Set EFER.LME=1 as a precaution in case hypervisor pulls the rug */
+ pushl %ecx
+ movl $MSR_EFER, %ecx
+ rdmsr
+ btsl $_EFER_LME, %eax
+ wrmsr
+ popl %ecx
+
/* Enable PAE and LA57 (if required) paging modes */
movl $X86_CR4_PAE, %eax
cmpl $0, %edx
diff --git a/arch/x86/boot/compressed/pgtable.h b/arch/x86/boot/compressed/pgtable.h
index 91f75638f6e6..6ff7e81b5628 100644
--- a/arch/x86/boot/compressed/pgtable.h
+++ b/arch/x86/boot/compressed/pgtable.h
@@ -6,7 +6,7 @@
#define TRAMPOLINE_32BIT_PGTABLE_OFFSET 0
#define TRAMPOLINE_32BIT_CODE_OFFSET PAGE_SIZE
-#define TRAMPOLINE_32BIT_CODE_SIZE 0x60
+#define TRAMPOLINE_32BIT_CODE_SIZE 0x70
#define TRAMPOLINE_32BIT_STACK_END TRAMPOLINE_32BIT_SIZE
--
2.19.1