Message-ID: <08418fc0-839a-f2fb-1c2e-b4f077d2647b@amd.com>
Date: Mon, 21 Aug 2023 08:10:05 -0500
From: Tom Lendacky <thomas.lendacky@....com>
To: Steve Rutherford <srutherford@...gle.com>,
Borislav Petkov <bp@...en8.de>,
Thomas Gleixner <tglx@...utronix.de>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H . Peter Anvin" <hpa@...or.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, David.Kaplan@....com,
jacobhxu@...gle.com, patelsvishal@...gle.com, bhillier@...gle.com
Subject: Re: [PATCH] x86/sev: Make early_set_memory_decrypted() calls page aligned
On 8/18/23 18:34, Steve Rutherford wrote:
> early_set_memory_decrypted() assumes its parameters are page aligned.
> Non-page aligned calls result in additional pages being marked as
> decrypted via the encryption status hypercall, which results in
> consistent corruption of pages during live migration. Live
> migration requires accurate encryption status information to avoid
> migrating pages based on the wrong encryption status.
Hmmm... I'm not sure this is the proper fix. The code is actually doing
the right thing from an encryption/decryption point of view by checking the
c-bit for the PTE associated with the virtual address and the size
(possibly crossing page boundaries).
I think the problem is in the call to early_set_mem_enc_dec_hypercall(),
which doesn't take into account the possible crossing of page
boundaries and so can under-count the number of pages, right?
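
Roughly something like this (untested sketch, the helper name is made up
just to illustrate the rounding I have in mind):

	/*
	 * Count every page touched by [vaddr, vaddr + size), even when
	 * vaddr is not page aligned and the range crosses a page boundary.
	 */
	static unsigned long enc_dec_hypercall_npages(unsigned long vaddr,
						      unsigned long size)
	{
		unsigned long vaddr_end = PAGE_ALIGN(vaddr + size);
		unsigned long vaddr_start = vaddr & PAGE_MASK;

		return (vaddr_end - vaddr_start) >> PAGE_SHIFT;
	}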
Thanks,
Tom
>
> Fixes: 4716276184ec ("X86/KVM: Decrypt shared per-cpu variables when SEV is active")
> Signed-off-by: Steve Rutherford <srutherford@...gle.com>
> ---
> arch/x86/kernel/kvm.c | 14 +++++++++++++-
> 1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 6a36db4f79fd..a0c072d3103c 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -419,7 +419,14 @@ static u64 kvm_steal_clock(int cpu)
>
> static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
> {
> - early_set_memory_decrypted((unsigned long) ptr, size);
> + /*
> + * early_set_memory_decrypted() requires page aligned parameters, but
> + * this function needs to handle ptrs offset into a page.
> + */
> + unsigned long start = PAGE_ALIGN_DOWN((unsigned long) ptr);
> + unsigned long end = (unsigned long) ptr + size;
> +
> + early_set_memory_decrypted(start, end - start);
> }
>
> /*
> @@ -438,6 +445,11 @@ static void __init sev_map_percpu_data(void)
> return;
>
> for_each_possible_cpu(cpu) {
> + /*
> + * Calling __set_percpu_decrypted() for each per-cpu variable is
> +	 * inefficient, since it may decrypt the same page multiple times.
> + * That said, it avoids the need for more complicated logic.
> + */
> __set_percpu_decrypted(&per_cpu(apf_reason, cpu), sizeof(apf_reason));
> __set_percpu_decrypted(&per_cpu(steal_time, cpu), sizeof(steal_time));
> __set_percpu_decrypted(&per_cpu(kvm_apic_eoi, cpu), sizeof(kvm_apic_eoi));