Message-Id: <20201020061859.18385-8-kirill.shutemov@linux.intel.com>
Date: Tue, 20 Oct 2020 09:18:50 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>
Cc: David Rientjes <rientjes@...gle.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Kees Cook <keescook@...omium.org>,
Will Drewry <wad@...omium.org>,
"Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
"Kleen, Andi" <andi.kleen@...el.com>,
Liran Alon <liran.alon@...cle.com>,
Mike Rapoport <rppt@...nel.org>, x86@...nel.org,
kvm@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: [RFCv2 07/16] x86/realmode: Share trampoline area if KVM memory protection enabled
If KVM memory protection is active, the trampoline area will need to be
in shared memory in order to bring up secondary processors successfully.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
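
[Editor's note: for context, the kvm_mem_protected() helper used in the hunk
below is introduced earlier in this series. A minimal, illustrative sketch of
what such a helper could look like follows; the flag, the init hook and the
KVM_FEATURE_MEM_PROTECTED feature-bit name are assumptions for illustration,
not necessarily the series' actual implementation.]

/*
 * Illustrative sketch only -- not part of this patch.  Assumes the guest
 * learns about memory protection via a KVM paravirt feature bit during
 * early platform init and caches the result in a boolean.
 */
#include <linux/init.h>
#include <linux/types.h>
#include <asm/kvm_para.h>

static bool mem_protected __ro_after_init;

bool kvm_mem_protected(void)
{
	return mem_protected;
}

static void __init kvm_detect_mem_protection(void)
{
	/* KVM_FEATURE_MEM_PROTECTED is an assumed feature-bit name. */
	if (kvm_para_has_feature(KVM_FEATURE_MEM_PROTECTED))
		mem_protected = true;
}

[With a helper like this in place, setup_real_mode() in the hunk below treats
a protected KVM guest like an SME guest: the trampoline pages are made
shared/decrypted before the real-mode blob is copied in, so secondary
processors can be started from them. As the existing comment notes, SEV does
not need this.]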
---
arch/x86/realmode/init.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index 1ed1208931e0..7392940a7f96 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -9,6 +9,7 @@
#include <asm/realmode.h>
#include <asm/tlbflush.h>
#include <asm/crash.h>
+#include <asm/kvm_para.h>
struct real_mode_header *real_mode_header;
u32 *trampoline_cr4_features;
@@ -55,11 +56,11 @@ static void __init setup_real_mode(void)
base = (unsigned char *)real_mode_header;
/*
- * If SME is active, the trampoline area will need to be in
- * decrypted memory in order to bring up other processors
+ * If SME or KVM memory protection is active, the trampoline area will
+ * need to be in decrypted memory in order to bring up other processors
* successfully. This is not needed for SEV.
*/
- if (sme_active())
+ if (sme_active() || kvm_mem_protected())
set_memory_decrypted((unsigned long)base, size >> PAGE_SHIFT);
memcpy(base, real_mode_blob, size);
--
2.26.2