Message-ID: <621c260399a05338ba6d034e275e19714ad3665c.camel@intel.com>
Date: Tue, 16 Apr 2024 14:36:31 +0000
From: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
To: "kvm@...r.kernel.org" <kvm@...r.kernel.org>, "Yamahata, Isaku"
	<isaku.yamahata@...el.com>
CC: "seanjc@...gle.com" <seanjc@...gle.com>, "Huang, Kai"
	<kai.huang@...el.com>, "federico.parola@...ito.it"
	<federico.parola@...ito.it>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "isaku.yamahata@...il.com"
	<isaku.yamahata@...il.com>, "dmatlack@...gle.com" <dmatlack@...gle.com>,
	"michael.roth@....com" <michael.roth@....com>, "pbonzini@...hat.com"
	<pbonzini@...hat.com>
Subject: Re: [PATCH v2 03/10] KVM: x86/mmu: Extract __kvm_mmu_do_page_fault()

On Wed, 2024-04-10 at 15:07 -0700, isaku.yamahata@...el.com wrote:
> From: Isaku Yamahata <isaku.yamahata@...el.com>
> 
> Extract out __kvm_mmu_do_page_fault() from kvm_mmu_do_page_fault().  The
> inner function is to initialize struct kvm_page_fault and to call the fault
> handler, and the outer function handles updating stats and converting
> return code.  KVM_MAP_MEMORY will call the KVM page fault handler.
> 
> This patch makes the emulation_type always set irrelevant to the return
           a comma would help parse this better ^
> code.

>   kvm_mmu_page_fault() is the only caller of kvm_mmu_do_page_fault(),

Not technically correct; there are other callers that pass NULL for
emulation_type.
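
For reference, the caller I have in mind is the async #PF ready path, which
from memory looks roughly like this (paraphrased, not verbatim):

        /* kvm_arch_async_page_ready(): replays the fault as a prefetch and
         * has no use for emulation info, so emulation_type is NULL.
         */
        kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true, NULL);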

> and references the value only when PF_RET_EMULATE is returned.  Therefore,
> this adjustment doesn't affect functionality.

Is there a problem with dropping the argument, then?
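
For context, as far as I remember the only place the value is consumed is the
emulate tail of kvm_mmu_page_fault(), something like (paraphrased, not
verbatim):

        int r, emulation_type = EMULTYPE_PF;
        ...
        r = kvm_mmu_do_page_fault(vcpu, cr2_or_gpa, error_code, false,
                                  &emulation_type);
        ...
        /* emulation_type is only looked at if we end up emulating */
        return x86_emulate_instruction(vcpu, cr2_or_gpa, emulation_type,
                                       insn, insn_len);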

> 
> No functional change intended.

Can we not use "intended"? It sounds like hedging for excuses.

> 
> Suggested-by: Sean Christopherson <seanjc@...gle.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
> ---
> v2:
> - Newly introduced. (Sean)
> ---
>  arch/x86/kvm/mmu/mmu_internal.h | 32 +++++++++++++++++++++-----------
>  1 file changed, 21 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index e68a60974cf4..9baae6c223ee 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -287,8 +287,8 @@ static inline void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
>                                       fault->is_private);
>  }
>  
> -static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> -                                       u64 err, bool prefetch, int *emulation_type)
> +static inline int __kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> +                                         u64 err, bool prefetch, int *emulation_type)
>  {
>         struct kvm_page_fault fault = {
>                 .addr = cr2_or_gpa,
> @@ -318,14 +318,6 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>                 fault.slot = kvm_vcpu_gfn_to_memslot(vcpu, fault.gfn);
>         }
>  
> -       /*
> -        * Async #PF "faults", a.k.a. prefetch faults, are not faults from the
> -        * guest perspective and have already been counted at the time of the
> -        * original fault.
> -        */
> -       if (!prefetch)
> -               vcpu->stat.pf_taken++;
> -
>         if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && fault.is_tdp)
>                 r = kvm_tdp_page_fault(vcpu, &fault);
>         else
> @@ -333,12 +325,30 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  
>         if (r == RET_PF_EMULATE && fault.is_private) {
>                 kvm_mmu_prepare_memory_fault_exit(vcpu, &fault);
> -               return -EFAULT;
> +               r = -EFAULT;
>         }
>  
>         if (fault.write_fault_to_shadow_pgtable && emulation_type)
>                 *emulation_type |= EMULTYPE_WRITE_PF_TO_SP;
>  
> +       return r;
> +}
> +
> +static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> +                                       u64 err, bool prefetch, int *emulation_type)
> +{
> +       int r;
> +
> +       /*
> +        * Async #PF "faults", a.k.a. prefetch faults, are not faults from the
> +        * guest perspective and have already been counted at the time of the
> +        * original fault.
> +        */
> +       if (!prefetch)
> +               vcpu->stat.pf_taken++;

From the name, it makes sense not to count KVM_MAP_MEMORY as a pf_taken. But
kvm_arch_async_page_ready() increments it as well, which makes it more like a
"faulted-in" count. I think the code in this patch is ok.

> +
> +       r = __kvm_mmu_do_page_fault(vcpu, cr2_or_gpa, err, prefetch, emulation_type);
> +
>         /*
>          * Similar to above, prefetch faults aren't truly spurious, and the
>          * async #PF path doesn't do emulation.  Do count faults that are fixed
