Message-ID: <CABgObfZfyWzKRafPVcTyQ23oO=aAkc7Pmg8En4412J0vx1WotQ@mail.gmail.com>
Date: Thu, 11 Jul 2024 10:30:41 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Binbin Wu <binbin.wu@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
isaku.yamahata@...el.com, seanjc@...gle.com, xiaoyao.li@...el.com
Subject: Re: [PATCH v5 6/7] KVM: x86: Implement kvm_arch_vcpu_pre_fault_memory()
On Thu, Jul 11, 2024 at 7:37 AM Binbin Wu <binbin.wu@...ux.intel.com> wrote:
> On 7/11/2024 1:40 AM, Paolo Bonzini wrote:
> > Wire KVM_PRE_FAULT_MEMORY ioctl to __kvm_mmu_do_page_fault() to populate guest
>
> __kvm_mmu_do_page_fault() -> kvm_mmu_do_page_fault()
>
> > memory. It can be called right after KVM_CREATE_VCPU creates a vCPU,
> > since at that point kvm_mmu_create() and kvm_init_mmu() are called and
> > the vCPU is ready to invoke the KVM page fault handler.
> >
> > The helper function kvm_mmu_map_tdp_page take care of the logic to
>
> kvm_mmu_map_tdp_page -> kvm_tdp_map_page()?
Yes, will fix.
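
For anyone following along, here is a rough userspace sketch of the
intended flow, i.e. pre-faulting a GPA range on a freshly created vCPU.
It assumes a <linux/kvm.h> from a kernel that includes this series; the
vCPU fd, the memslot setup and the GPA range are placeholders, not part
of the patch:

    /* Sketch only: KVM_PRE_FAULT_MEMORY and struct kvm_pre_fault_memory
     * come from this series; everything else is caller-provided. */
    #include <linux/kvm.h>
    #include <sys/ioctl.h>
    #include <string.h>
    #include <errno.h>

    /* Pre-fault [gpa, gpa + size) on an already-created vCPU fd. */
    static int pre_fault_range(int vcpu_fd, __u64 gpa, __u64 size)
    {
            struct kvm_pre_fault_memory range;

            memset(&range, 0, sizeof(range));
            range.gpa = gpa;
            range.size = size;

            /* On partial completion gpa/size are advanced by the kernel,
             * so simply retry until the whole range has been mapped. */
            while (range.size) {
                    if (ioctl(vcpu_fd, KVM_PRE_FAULT_MEMORY, &range) == 0)
                            continue;
                    if (errno == EINTR || errno == EAGAIN)
                            continue;
                    return -errno;
            }
            return 0;
    }
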
> > diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> > index 80e5afde69f4..4287a8071a3a 100644
> > --- a/arch/x86/kvm/Kconfig
> > +++ b/arch/x86/kvm/Kconfig
> > @@ -44,6 +44,7 @@ config KVM
> > select KVM_VFIO
> > select HAVE_KVM_PM_NOTIFIER if PM
> > select KVM_GENERIC_HARDWARE_ENABLING
> > + select KVM_GENERIC_PRE_FAULT_MEMORY
> > select KVM_WERROR if WERROR
> > help
> > Support hosting fully virtualized guest machines using hardware
> [...]
> > index ba0ad76f53bc..a6968eadd418 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -4705,6 +4705,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> > case KVM_CAP_MEMORY_FAULT_INFO:
> > r = 1;
> > break;
> > + case KVM_CAP_PRE_FAULT_MEMORY:
> > + r = tdp_enabled;
> > + break;
> If !CONFIG_KVM_GENERIC_PRE_FAULT_MEMORY, this should return 0.
This is x86-specific code, and CONFIG_KVM_GENERIC_PRE_FAULT_MEMORY
is always selected by CONFIG_KVM on x86 (that is, it does not depend
on TDX or anything else).
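
Userspace is of course still expected to probe the capability before
issuing the ioctl; a minimal sketch, assuming the VM fd is already set
up by the caller:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* Nonzero if KVM_PRE_FAULT_MEMORY is usable on this VM (on x86 this
     * reduces to the tdp_enabled check quoted above). */
    static int have_pre_fault_memory(int vm_fd)
    {
            return ioctl(vm_fd, KVM_CHECK_EXTENSION,
                         KVM_CAP_PRE_FAULT_MEMORY) > 0;
    }
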
Paolo