Message-ID: <20230818233451.3615464-1-srutherford@google.com>
Date:   Fri, 18 Aug 2023 16:34:51 -0700
From:   Steve Rutherford <srutherford@...gle.com>
To:     Borislav Petkov <bp@...en8.de>,
        Thomas Gleixner <tglx@...utronix.de>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Ingo Molnar <mingo@...hat.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
        "H . Peter Anvin" <hpa@...or.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, David.Kaplan@....com,
        jacobhxu@...gle.com, patelsvishal@...gle.com, bhillier@...gle.com,
        Steve Rutherford <srutherford@...gle.com>
Subject: [PATCH] x86/sev: Make early_set_memory_decrypted() calls page aligned

early_set_memory_decrypted() assumes its parameters are page aligned.
Non-page aligned calls result in additional pages being marked as
decrypted via the encryption status hypercall, which results in
consistent corruption of pages during live migration. Live
migration requires accurate encryption status information to avoid
migrating pages from the wrong perspective, i.e. treating a private
page as shared or vice versa.

Fixes: 4716276184ec ("X86/KVM: Decrypt shared per-cpu variables when SEV is active")
Signed-off-by: Steve Rutherford <srutherford@...gle.com>
---
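As a side note, the sketch below mirrors the rounding the fixed
__set_percpu_decrypted() performs: round the start address down to a
page boundary and keep the length covering through the original end.
It is a standalone userspace illustration, not kernel code; the 4 KiB
PAGE_SIZE, the local PAGE_ALIGN_DOWN definition, and the example
address are assumptions made only for this sketch.

  #include <stdio.h>
  #include <stdint.h>

  #define PAGE_SIZE 4096UL
  #define PAGE_ALIGN_DOWN(x) ((x) & ~(PAGE_SIZE - 1))

  int main(void)
  {
          /* A variable that happens to start 0x100 bytes into a page. */
          uintptr_t ptr = 0x1000100;
          unsigned long size = 0x40;

          /* Round the start down to the page boundary and keep the
           * range covering through the original end of the variable.
           */
          uintptr_t start = PAGE_ALIGN_DOWN(ptr);
          uintptr_t end = ptr + size;

          printf("decrypt 0x%lx-0x%lx (%lu bytes)\n",
                 (unsigned long)start, (unsigned long)end,
                 (unsigned long)(end - start));
          return 0;
  }

With these example values the call covers 0x1000000-0x1000140: a
page-aligned start with a length that still reaches the end of the
variable.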
 arch/x86/kernel/kvm.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 6a36db4f79fd..a0c072d3103c 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -419,7 +419,14 @@ static u64 kvm_steal_clock(int cpu)
 
 static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
 {
-	early_set_memory_decrypted((unsigned long) ptr, size);
+	/*
+	 * early_set_memory_decrypted() requires page aligned parameters, but
+	 * this function needs to handle ptrs offset into a page.
+	 */
+	unsigned long start = PAGE_ALIGN_DOWN((unsigned long) ptr);
+	unsigned long end = (unsigned long) ptr + size;
+
+	early_set_memory_decrypted(start, end - start);
 }
 
 /*
@@ -438,6 +445,11 @@ static void __init sev_map_percpu_data(void)
 		return;
 
 	for_each_possible_cpu(cpu) {
+		/*
+		 * Calling __set_percpu_decrypted() for each per-cpu variable is
+		 * inefficient, since it may decrypt the same page multiple times.
+		 * That said, it avoids the need for more complicated logic.
+		 */
 		__set_percpu_decrypted(&per_cpu(apf_reason, cpu), sizeof(apf_reason));
 		__set_percpu_decrypted(&per_cpu(steal_time, cpu), sizeof(steal_time));
 		__set_percpu_decrypted(&per_cpu(kvm_apic_eoi, cpu), sizeof(kvm_apic_eoi));
-- 
2.42.0.rc1.204.g551eb34607-goog
