Message-ID: <ZekKwlLdf6vm5e5u@google.com>
Date: Wed, 6 Mar 2024 16:30:58 -0800
From: David Matlack <dmatlack@...gle.com>
To: isaku.yamahata@...el.com
Cc: kvm@...r.kernel.org, isaku.yamahata@...il.com,
linux-kernel@...r.kernel.org,
Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Michael Roth <michael.roth@....com>,
Federico Parola <federico.parola@...ito.it>
Subject: Re: [RFC PATCH 6/8] KVM: x86: Implement kvm_arch_{,pre_}vcpu_map_memory()
On 2024-03-01 09:28 AM, isaku.yamahata@...el.com wrote:
> From: Isaku Yamahata <isaku.yamahata@...el.com>
>
> Wire KVM_MAP_MEMORY ioctl to kvm_mmu_map_tdp_page() to populate guest
> memory.
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
> ---
> arch/x86/kvm/x86.c | 49 ++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 49 insertions(+)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 3b8cb69b04fa..6025c0e12d89 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4660,6 +4660,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
> case KVM_CAP_IRQFD_RESAMPLE:
> case KVM_CAP_MEMORY_FAULT_INFO:
> + case KVM_CAP_MAP_MEMORY:
> r = 1;
> break;
> case KVM_CAP_EXIT_HYPERCALL:
> @@ -5805,6 +5806,54 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
> }
> }
>
> +int kvm_arch_vcpu_pre_map_memory(struct kvm_vcpu *vcpu)
> +{
> + return kvm_mmu_reload(vcpu);
> +}
Why is this here and not in kvm_arch_vcpu_map_memory()?
> +
> +int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
> + struct kvm_memory_mapping *mapping)
> +{
> + u8 max_level, goal_level = PG_LEVEL_4K;
> + u32 error_code;
> + int r;
> +
> + error_code = 0;
> + if (mapping->flags & KVM_MEMORY_MAPPING_FLAG_WRITE)
> + error_code |= PFERR_WRITE_MASK;
> + if (mapping->flags & KVM_MEMORY_MAPPING_FLAG_EXEC)
> + error_code |= PFERR_FETCH_MASK;
> + if (mapping->flags & KVM_MEMORY_MAPPING_FLAG_USER)
> + error_code |= PFERR_USER_MASK;
> + if (mapping->flags & KVM_MEMORY_MAPPING_FLAG_PRIVATE) {
> +#ifdef PFERR_PRIVATE_ACCESS
> + error_code |= PFERR_PRIVATE_ACCESS;
> +#else
> + return -OPNOTSUPP;
-EOPNOTSUPP
> +#endif
> + }
> +
> + if (IS_ALIGNED(mapping->base_gfn, KVM_PAGES_PER_HPAGE(PG_LEVEL_1G)) &&
> + mapping->nr_pages >= KVM_PAGES_PER_HPAGE(PG_LEVEL_1G))
> + max_level = PG_LEVEL_1G;
> + else if (IS_ALIGNED(mapping->base_gfn, KVM_PAGES_PER_HPAGE(PG_LEVEL_2M)) &&
> + mapping->nr_pages >= KVM_PAGES_PER_HPAGE(PG_LEVEL_2M))
> + max_level = PG_LEVEL_2M;
> + else
> + max_level = PG_LEVEL_4K;
Is there a requirement that KVM must not map memory outside of the
requested region?
> +
> + r = kvm_mmu_map_page(vcpu, gfn_to_gpa(mapping->base_gfn), error_code,
> + max_level, &goal_level);
> + if (r)
> + return r;
> +
> + if (mapping->source)
> + mapping->source += KVM_HPAGE_SIZE(goal_level);
> + mapping->base_gfn += KVM_PAGES_PER_HPAGE(goal_level);
> + mapping->nr_pages -= KVM_PAGES_PER_HPAGE(goal_level);
> + return r;
> +}
> +
> long kvm_arch_vcpu_ioctl(struct file *filp,
> unsigned int ioctl, unsigned long arg)
> {
> --
> 2.25.1
>