Message-ID: <20240417192837.GI3039520@ls.amr.corp.intel.com>
Date: Wed, 17 Apr 2024 12:28:37 -0700
From: Isaku Yamahata <isaku.yamahata@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
isaku.yamahata@...el.com, xiaoyao.li@...el.com,
binbin.wu@...ux.intel.com, seanjc@...gle.com,
rick.p.edgecombe@...el.com, isaku.yamahata@...ux.intel.com
Subject: Re: [PATCH 6/7] KVM: x86: Implement kvm_arch_vcpu_map_memory()
On Wed, Apr 17, 2024 at 11:34:49AM -0400,
Paolo Bonzini <pbonzini@...hat.com> wrote:
> From: Isaku Yamahata <isaku.yamahata@...el.com>
>
> Wire the KVM_MAP_MEMORY ioctl to kvm_tdp_map_page() to populate guest
> memory.  When KVM_CREATE_VCPU creates a vCPU, it initializes the x86
> KVM MMU via kvm_mmu_create() and kvm_init_mmu(), so the vCPU is ready
> to invoke the KVM page fault handler.
As a record of the past discussion, and to address Rick's comment at
https://lore.kernel.org/all/75b213fd73fcb5872703f89a9c6bb67ea91e3bd7.camel@intel.com/
the current implementation supports TDP only, because populating with a
GVA is moot based on the thread [1].  If necessary, this restriction can
be relaxed in the future.

[1] https://lore.kernel.org/all/116179545fafbf39ed01e1f0f5ac76e0467fc09a.camel@intel.com/
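
For illustration, here is a rough userspace sketch of how the ioctl is
meant to be driven.  Caveat: the struct layout, the ioctl number and the
-EINTR/-EAGAIN retry behavior come from the generic KVM_MAP_MEMORY
patches earlier in this series, not from this patch, so treat everything
beyond base_address and size as assumed:

	#include <errno.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>	/* assumes a tree with this series applied */

	static int map_range(int kvm_fd, int vcpu_fd, __u64 gpa, __u64 len)
	{
		struct kvm_map_memory m;  /* layout per the series' uapi patch */
		int r;

		/* On x86, KVM_CAP_MAP_MEMORY reports tdp_enabled; see the
		 * kvm_vm_ioctl_check_extension() hunk below. */
		if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MAP_MEMORY) <= 0)
			return -ENOTSUP;

		memset(&m, 0, sizeof(m));
		m.base_address = gpa;	/* advanced by the kernel on each call */
		m.size = len;		/* shrunk by the kernel on each call */

		/* Every successful call populates at least one page, so
		 * simply retry until the whole range is mapped. */
		while (m.size) {
			r = ioctl(vcpu_fd, KVM_MAP_MEMORY, &m);
			if (r && errno != EINTR && errno != EAGAIN)
				return -errno;
		}
		return 0;
	}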
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
> Message-ID: <7138a3bc00ea8d3cbe0e59df15f8c22027005b59.1712785629.git.isaku.yamahata@...el.com>
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> ---
> arch/x86/kvm/Kconfig | 1 +
> arch/x86/kvm/x86.c | 43 +++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 44 insertions(+)
>
> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> index 7632fe6e4db9..e58360d368ec 100644
> --- a/arch/x86/kvm/Kconfig
> +++ b/arch/x86/kvm/Kconfig
> @@ -44,6 +44,7 @@ config KVM
> select KVM_VFIO
> select HAVE_KVM_PM_NOTIFIER if PM
> select KVM_GENERIC_HARDWARE_ENABLING
> + select KVM_GENERIC_MAP_MEMORY
> help
> Support hosting fully virtualized guest machines using hardware
> virtualization extensions. You will need a fairly recent
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 83b8260443a3..f84c75c2a47f 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4715,6 +4715,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> case KVM_CAP_MEMORY_FAULT_INFO:
> r = 1;
> break;
> + case KVM_CAP_MAP_MEMORY:
> + r = tdp_enabled;
> + break;
> case KVM_CAP_EXIT_HYPERCALL:
> r = KVM_EXIT_HYPERCALL_VALID_MASK;
> break;
> @@ -5867,6 +5870,46 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
> }
> }
>
> +int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
> + struct kvm_map_memory *mapping)
> +{
> + u64 mapped, end, error_code = 0;
> + u8 level = PG_LEVEL_4K;
> + int r;
> +
> + /*
> + * Shadow paging uses GVA for kvm page fault. The first implementation
> + * supports GPA only to avoid confusion.
> + */
> + if (!tdp_enabled)
> + return -EOPNOTSUPP;
> +
> + /*
> + * reload is efficient when called repeatedly, so we can do it on
> + * every iteration.
> + */
> + kvm_mmu_reload(vcpu);
> +
> + if (kvm_arch_has_private_mem(vcpu->kvm) &&
> + kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(mapping->base_address)))
> + error_code |= PFERR_PRIVATE_ACCESS;
> +
> + r = kvm_tdp_map_page(vcpu, mapping->base_address, error_code, &level);
> + if (r)
> + return r;
> +
> + /*
> + * level can be more than the alignment of mapping->base_address if
> + * the mapping can use a huge page.
> + */
> + end = (mapping->base_address & KVM_HPAGE_MASK(level)) +
> + KVM_HPAGE_SIZE(level);
As Chao pointed out, ALIGN() simplifies this:

	end = ALIGN(mapping->base_address, KVM_HPAGE_SIZE(level));

https://lore.kernel.org/all/Zh94V8ochIXEkO17@chao-email/
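
To make the arithmetic concrete, a standalone sketch with hypothetical
values and simplified stand-ins for the KVM_HPAGE_* macros.  The two
forms agree for an unaligned base, but diverge when base_address is
already aligned to the page size, where ALIGN() returns base itself:

	#include <stdio.h>
	#include <stdint.h>

	/* Simplified stand-ins for the KVM_HPAGE_* macros at 2 MiB. */
	#define HPAGE_SIZE	0x200000ULL
	#define HPAGE_MASK	(~(HPAGE_SIZE - 1))
	#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

	static void show(uint64_t base)
	{
		/* Patch version: round down to the huge-page start, then
		 * add the huge-page size. */
		uint64_t end_mask = (base & HPAGE_MASK) + HPAGE_SIZE;
		/* Suggested version: round up to the next boundary. */
		uint64_t end_align = ALIGN(base, HPAGE_SIZE);

		printf("base 0x%llx: mask-and-add 0x%llx, ALIGN() 0x%llx\n",
		       (unsigned long long)base,
		       (unsigned long long)end_mask,
		       (unsigned long long)end_align);
	}

	int main(void)
	{
		show(0x1234000ULL);	/* unaligned: both yield 0x1400000 */
		show(0x1400000ULL);	/* aligned: 0x1600000 vs. 0x1400000;
					 * with ALIGN(), end - base and thus
					 * "mapped" would be 0 in this case */
		return 0;
	}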
> + mapped = min(mapping->size, end - mapping->base_address);
> + mapping->size -= mapped;
> + mapping->base_address += mapped;
> + return r;
> +}
> +
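
For readers without the rest of the series at hand: the reason this
function advances base_address and shrinks size instead of looping
itself is that the arch-independent ioctl handler iterates.  A
hypothetical sketch of that caller (the real loop lives in the common
KVM_MAP_MEMORY patch, not here):

	while (mapping->size) {
		r = kvm_arch_vcpu_map_memory(vcpu, mapping);
		if (r)
			break;	/* e.g. -EINTR; the updated base_address
				 * and size report partial progress */
		cond_resched();	/* assumed: keep long ranges preemptible */
	}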
> long kvm_arch_vcpu_ioctl(struct file *filp,
> unsigned int ioctl, unsigned long arg)
> {
> --
> 2.43.0
>
>
>
--
Isaku Yamahata <isaku.yamahata@...el.com>