Message-ID: <CABgObfYpjYZ-UY_Dh+-u-r-Gp2nBDiu0o5yScGrraCDj6wYcxw@mail.gmail.com>
Date: Wed, 17 Apr 2024 14:26:12 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: isaku.yamahata@...el.com
Cc: kvm@...r.kernel.org, isaku.yamahata@...il.com,
linux-kernel@...r.kernel.org, Sean Christopherson <seanjc@...gle.com>,
Michael Roth <michael.roth@....com>, David Matlack <dmatlack@...gle.com>,
Federico Parola <federico.parola@...ito.it>, Kai Huang <kai.huang@...el.com>
Subject: Re: [PATCH v2 08/10] KVM: x86: Add a hook in kvm_arch_vcpu_map_memory()

On Thu, Apr 11, 2024 at 12:08 AM <isaku.yamahata@...el.com> wrote:
>
> From: Isaku Yamahata <isaku.yamahata@...el.com>
>
> Add a hook to kvm_arch_vcpu_map_memory() for KVM_MAP_MEMORY, called before
> kvm_mmu_map_page(), to adjust the page fault error code. The hook can hold
> vendor-specific logic to make those adjustments and enforce the
> restrictions. SEV and TDX KVM will use the hook.
>
> SEV and TDX need to adjust the KVM page fault error code or refuse the
> operation because of their restrictions. In particular, TDX requires that
> guest memory be populated before the VM is finalized.
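
For illustration, a vendor implementation of this hook might look
roughly like the sketch below (tdx_pre_mmu_map_page(), to_kvm_tdx()
and the "finalized" flag are assumed names for illustration, not code
from the actual SEV/TDX patches):

    /*
     * Hypothetical TDX-style hook, sketch only: refuse the operation
     * once the TD is finalized, otherwise mark the access as private
     * so the fault path populates private memory.
     */
    static int tdx_pre_mmu_map_page(struct kvm_vcpu *vcpu,
                                    struct kvm_memory_mapping *mapping,
                                    u64 *error_code)
    {
            /* TDX: guest memory can only be populated pre-finalization. */
            if (to_kvm_tdx(vcpu->kvm)->finalized)
                    return -EINVAL;

            /* Pre-finalization population targets private memory. */
            *error_code |= PFERR_PRIVATE_ACCESS;
            return 0;
    }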
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
> ---
> v2:
> - Make pre_mmu_map_page() to take error_code.
> - Drop post_mmu_map_page().
> - Drop struct kvm_memory_map.source check.
> ---
> arch/x86/include/asm/kvm-x86-ops.h |  1 +
> arch/x86/include/asm/kvm_host.h    |  3 +++
> arch/x86/kvm/x86.c                 | 28 ++++++++++++++++++++++++++++
> 3 files changed, 32 insertions(+)
>
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 5187fcf4b610..a5d4f4d5265d 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -139,6 +139,7 @@ KVM_X86_OP(vcpu_deliver_sipi_vector)
> KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
> KVM_X86_OP_OPTIONAL(get_untagged_addr)
> KVM_X86_OP_OPTIONAL(alloc_apic_backing_page)
> +KVM_X86_OP_OPTIONAL(pre_mmu_map_page);
>
> #undef KVM_X86_OP
> #undef KVM_X86_OP_OPTIONAL
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 3ce244ad44e5..2bf7f97f889b 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1812,6 +1812,9 @@ struct kvm_x86_ops {
>
>         gva_t (*get_untagged_addr)(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags);
>         void *(*alloc_apic_backing_page)(struct kvm_vcpu *vcpu);
> +       int (*pre_mmu_map_page)(struct kvm_vcpu *vcpu,
> +                               struct kvm_memory_mapping *mapping,
> +                               u64 *error_code);
> };
>
> struct kvm_x86_nested_ops {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 8ba9c1720ac9..b76d854701d5 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5868,6 +5868,26 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
>         }
> }
>
> +static int kvm_pre_mmu_map_page(struct kvm_vcpu *vcpu,
> +                                struct kvm_memory_mapping *mapping,
> +                                u64 *error_code)
> +{
> +       int r = 0;
> +
> +       if (vcpu->kvm->arch.vm_type == KVM_X86_DEFAULT_VM) {
> +               /* nothing */
> +       } else if (vcpu->kvm->arch.vm_type == KVM_X86_SW_PROTECTED_VM) {
> +               if (kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(mapping->base_address)))
> +                       *error_code |= PFERR_PRIVATE_ACCESS;

This can probably be done for all VM types, not just KVM_X86_SW_PROTECTED_VM.
For now I am going to squash
    if (kvm_arch_has_private_mem(vcpu->kvm) &&
        kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(mapping->base_address)))
            *error_code |= PFERR_GUEST_ENC_MASK;
in the previous patch. If TDX or SEV need further adjustments, they can
introduce the hook once we know if/how it is used.
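
Concretely, the squashed logic might end up looking something like the
sketch below (the helper name and the surrounding shape are assumed
for illustration, not the literal code being applied):

    /*
     * Sketch only.  The check is safe for all VM types:
     * kvm_arch_has_private_mem() is false for KVM_X86_DEFAULT_VM, so
     * plain VMs never reach the kvm_mem_is_private() lookup.
     */
    static u64 kvm_map_memory_error_code(struct kvm_vcpu *vcpu,
                                         struct kvm_memory_mapping *mapping)
    {
            u64 error_code = 0;     /* base bits (e.g. write access) elided */

            if (kvm_arch_has_private_mem(vcpu->kvm) &&
                kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(mapping->base_address)))
                    error_code |= PFERR_GUEST_ENC_MASK;

            return error_code;
    }
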
Paolo