Date:   Fri, 15 Mar 2019 18:32:01 +0800
From:   Lianbo Jiang <lijiang@...hat.com>
To:     linux-kernel@...r.kernel.org
Cc:     kexec@...ts.infradead.org, tglx@...utronix.de, mingo@...hat.com,
        bp@...en8.de, x86@...nel.org, hpa@...or.com,
        akpm@...ux-foundation.org, dyoung@...hat.com,
        brijesh.singh@....com, thomas.lendacky@....com, bhe@...hat.com
Subject: [PATCH 1/3] kexec: Do not map the kexec area as decrypted when SEV is active

Currently, arch_kexec_post_{alloc,free}_pages() unconditionally maps
the kexec area as decrypted. This works fine when SME is active,
because with SME the first kernel is loaded into decrypted memory by
the BIOS, so the second kernel must also be loaded into decrypted
memory.

When SEV is active, the first kernel is loaded into encrypted memory,
so the second kernel must also be loaded into encrypted memory. Make
sure that arch_kexec_post_{alloc,free}_pages() does not clear the
memory encryption mask from the kexec area when SEV is active.

Co-developed-by: Brijesh Singh <brijesh.singh@....com>
Signed-off-by: Brijesh Singh <brijesh.singh@....com>
Signed-off-by: Lianbo Jiang <lijiang@...hat.com>
---
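Note (not part of the commit message): a single sme_active() check is
enough to cover both cases because, in kernels around this posting,
sme_active() is false inside an SEV guest. A sketch of the helpers,
paraphrased from arch/x86/mm/mem_encrypt.c for illustration only:

	/*
	 * sme_me_mask is non-zero whenever memory encryption is
	 * enabled; sev_enabled distinguishes an SEV guest from a
	 * host using SME.
	 */
	bool sme_active(void)
	{
		/* True only on an SME host, false in an SEV guest. */
		return sme_me_mask && !sev_enabled;
	}

	bool sev_active(void)
	{
		/* True only inside an SEV guest. */
		return sme_me_mask && sev_enabled;
	}
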
 arch/x86/kernel/machine_kexec_64.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index ceba408ea982..bcebf4993da4 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -566,7 +566,10 @@ int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
 	 * not encrypted because when we boot to the new kernel the
 	 * pages won't be accessed encrypted (initially).
 	 */
-	return set_memory_decrypted((unsigned long)vaddr, pages);
+	if (sme_active())
+		return set_memory_decrypted((unsigned long)vaddr, pages);
+
+	return 0;
 }
 
 void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
@@ -575,5 +578,6 @@ void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
 	 * If SME is active we need to reset the pages back to being
 	 * an encrypted mapping before freeing them.
 	 */
-	set_memory_encrypted((unsigned long)vaddr, pages);
+	if (sme_active())
+		set_memory_encrypted((unsigned long)vaddr, pages);
 }
-- 
2.17.1
