Message-ID: <d20abcde-2d54-45cb-b821-1ff1af5cbb86@linux.intel.com>
Date: Tue, 12 Mar 2024 08:39:57 +0800
From: Binbin Wu <binbin.wu@...ux.intel.com>
To: Paolo Bonzini <pbonzini@...hat.com>, michael.roth@....com
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org, seanjc@...gle.com,
isaku.yamahata@...el.com, thomas.lendacky@....com
Subject: Re: [PATCH 21/21] KVM: x86: Add gmem hook for determining max NPT
mapping level
On 2/28/2024 7:21 AM, Paolo Bonzini wrote:
> From: Michael Roth <michael.roth@....com>
>
> In the case of SEV-SNP, whether or not a 2MB page can be mapped via a
> 2MB mapping in the guest's nested page table depends on whether or not
> any subpages within the range have already been initialized as private
> in the RMP table. The existing mixed-attribute tracking in KVM is
> insufficient here, for instance:
>
> - gmem allocates 2MB page
> - guest issues PVALIDATE on 2MB page
> - guest later converts a subpage to shared
> - SNP host code issues PSMASH to split 2MB RMP mapping to 4K
> - KVM MMU splits NPT mapping to 4K
Is there a sentence missing here, i.e. "guest converts the shared subpage
back to private"?
Otherwise, this conflicts with the following statement that "there are no
mixed attributes".
> At this point there are no mixed attributes, and KVM would normally
> allow for 2MB NPT mappings again, but this is actually not allowed
> because the RMP table mappings are 4K and cannot be promoted on the
> hypervisor side, so the NPT mappings must still be limited to 4K to
> match this.
>
> Add a hook to determine the max NPT mapping size in situations like
> this.
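
Apart from the question above, the hook itself looks reasonable to me.
For context, a minimal sketch of how an SNP backend could implement it
(the function name and exact checks below are my assumption based on the
SNP host-support series, not necessarily what the actual SNP patches do):

static int sev_gmem_validate_fault(struct kvm *kvm, kvm_pfn_t pfn,
				   gfn_t gfn, u8 *max_level)
{
	int rmp_level;
	bool assigned;
	int rc;

	/*
	 * Look up the RMP entry covering this pfn; snp_lookup_rmpentry()
	 * reports whether the page is assigned (private) and at what
	 * RMP page size it is currently mapped.
	 */
	rc = snp_lookup_rmpentry(pfn, &assigned, &rmp_level);
	if (rc || !assigned)
		return -EINVAL;

	/*
	 * If the RMP mapping was PSMASHed down to 4K, cap the NPT
	 * mapping level accordingly, even though KVM's attribute
	 * tracking no longer sees mixed attributes for the 2MB range.
	 */
	if (rmp_level < *max_level)
		*max_level = rmp_level;

	return 0;
}
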
>
> Signed-off-by: Michael Roth <michael.roth@....com>
> Message-Id: <20231230172351.574091-31-michael.roth@....com>
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> ---
> arch/x86/include/asm/kvm-x86-ops.h | 1 +
> arch/x86/include/asm/kvm_host.h | 1 +
> arch/x86/kvm/mmu/mmu.c | 7 +++++++
> 3 files changed, 9 insertions(+)
>
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 42474acb7375..436e3c157fae 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -140,6 +140,7 @@ KVM_X86_OP(vcpu_deliver_sipi_vector)
> KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
> KVM_X86_OP_OPTIONAL(get_untagged_addr)
> KVM_X86_OP_OPTIONAL_RET0(gmem_prepare)
> +KVM_X86_OP_OPTIONAL_RET0(gmem_validate_fault)
> KVM_X86_OP_OPTIONAL(gmem_invalidate)
>
> #undef KVM_X86_OP
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index e523b204697d..259e6bb1e447 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1806,6 +1806,7 @@ struct kvm_x86_ops {
> gva_t (*get_untagged_addr)(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags);
> int (*gmem_prepare)(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
> void (*gmem_invalidate)(kvm_pfn_t start, kvm_pfn_t end);
> + int (*gmem_validate_fault)(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, u8 *max_level);
> };
>
> struct kvm_x86_nested_ops {
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6b4cb71668df..bcf12ac489f9 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4339,6 +4339,13 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
> fault->max_level);
> fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
>
> + r = static_call(kvm_x86_gmem_validate_fault)(vcpu->kvm, fault->pfn,
> + fault->gfn, &fault->max_level);
> + if (r) {
> + kvm_release_pfn_clean(fault->pfn);
> + return r;
> + }
> +
> return RET_PF_CONTINUE;
> }
>
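
Also, since the op is declared with KVM_X86_OP_OPTIONAL_RET0, vendor code
that doesn't implement it effectively gets a stub that returns 0, i.e.
conceptually something like:

	/* Effective default when the op is not implemented (sketch). */
	static int gmem_validate_fault_noop(struct kvm *kvm, kvm_pfn_t pfn,
					    gfn_t gfn, u8 *max_level)
	{
		return 0;	/* no extra restriction, fault-in continues */
	}

so VMX and non-SNP SVM should see no functional change from this patch,
and on failure the pfn reference taken earlier in kvm_faultin_pfn_private()
is correctly dropped via kvm_release_pfn_clean().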