Date: Mon, 22 Apr 2024 16:46:27 +0800
From: Xiaoyao Li <xiaoyao.li@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
 kvm@...r.kernel.org
Cc: isaku.yamahata@...el.com, binbin.wu@...ux.intel.com, seanjc@...gle.com,
 rick.p.edgecombe@...el.com
Subject: Re: [PATCH 3/6] KVM: x86/mmu: Extract __kvm_mmu_do_page_fault()

On 4/19/2024 4:59 PM, Paolo Bonzini wrote:
> From: Isaku Yamahata <isaku.yamahata@...el.com>
> 
> Extract __kvm_mmu_do_page_fault() out of kvm_mmu_do_page_fault().  The
> inner function initializes struct kvm_page_fault and calls the fault
> handler, and the outer function handles updating the stats and
> converting the return code.

I don't see how the outer function converts the return code.
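
From my reading, the RET_PF_* -> {-errno, 0, 1} conversion lives in the
caller, kvm_mmu_page_fault(), not in the outer wrapper added here.
Roughly (paraphrased from memory, not part of this patch, so the details
may be off):

	r = kvm_mmu_do_page_fault(vcpu, cr2_or_gpa, error_code, false,
				  &emulation_type);
	if (r < 0)
		return r;		/* -errno propagated to the caller */
	if (r != RET_PF_EMULATE)
		return 1;		/* fault handled, re-enter the guest */
	/* otherwise fall through to instruction emulation */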

> KVM_PRE_FAULT_MEMORY will call the KVM page fault handler.

I assume this means the inner function will be used by KVM_PRE_FAULT_MEMORY.
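
If that's the intent, I'd expect a later patch to call the inner helper
directly, something like the sketch below. The function name here is
just my illustration, not taken from this series:

	/*
	 * Hypothetical KVM_PRE_FAULT_MEMORY path: call the inner helper so
	 * the guest #PF stats in the outer wrapper aren't touched.
	 * prefetch=true, since this isn't a real fault from the guest.
	 */
	static int kvm_pre_fault_one_page(struct kvm_vcpu *vcpu, gpa_t gpa,
					  u64 error_code)
	{
		return __kvm_mmu_do_page_fault(vcpu, gpa, error_code, true, NULL);
	}

Stating that in the changelog would make the motivation clearer.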

> This patch makes emulation_type always be set, regardless of the return
> code.  kvm_mmu_page_fault() is the only caller of kvm_mmu_do_page_fault(),
> and it references the value only when RET_PF_EMULATE is returned.
> Therefore, this adjustment doesn't affect functionality.

This paragraph needs to be removed, I think. It's not true: unless I'm
misreading the code, kvm_mmu_page_fault() isn't the only caller of
kvm_mmu_do_page_fault(); kvm_arch_async_page_ready() calls it too.
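
For reference, the call I have in mind (quoting from memory, so please
double-check):

	/*
	 * kvm_arch_async_page_ready(): async #PF completion also goes
	 * through kvm_mmu_do_page_fault(), with emulation_type passed
	 * as NULL.
	 */
	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true, NULL);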

> No functional change intended.
> 
> Suggested-by: Sean Christopherson <seanjc@...gle.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
> Message-ID: <ddf1d98420f562707b11e12c416cce8fdb986bb1.1712785629.git.isaku.yamahata@...el.com>
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> ---
>   arch/x86/kvm/mmu/mmu_internal.h | 38 +++++++++++++++++++++------------
>   1 file changed, 24 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index e68a60974cf4..9baae6c223ee 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -287,8 +287,8 @@ static inline void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
>   				      fault->is_private);
>   }
>   
> -static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> -					u64 err, bool prefetch, int *emulation_type)
> +static inline int __kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> +					  u64 err, bool prefetch, int *emulation_type)
>   {
>   	struct kvm_page_fault fault = {
>   		.addr = cr2_or_gpa,
> @@ -318,6 +318,27 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>   		fault.slot = kvm_vcpu_gfn_to_memslot(vcpu, fault.gfn);
>   	}
>   
> +	if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && fault.is_tdp)
> +		r = kvm_tdp_page_fault(vcpu, &fault);
> +	else
> +		r = vcpu->arch.mmu->page_fault(vcpu, &fault);
> +
> +	if (r == RET_PF_EMULATE && fault.is_private) {
> +		kvm_mmu_prepare_memory_fault_exit(vcpu, &fault);
> +		r = -EFAULT;
> +	}
> +
> +	if (fault.write_fault_to_shadow_pgtable && emulation_type)
> +		*emulation_type |= EMULTYPE_WRITE_PF_TO_SP;
> +
> +	return r;
> +}
> +
> +static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> +					u64 err, bool prefetch, int *emulation_type)
> +{
> +	int r;
> +
>   	/*
>   	 * Async #PF "faults", a.k.a. prefetch faults, are not faults from the
>   	 * guest perspective and have already been counted at the time of the
> @@ -326,18 +347,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>   	if (!prefetch)
>   		vcpu->stat.pf_taken++;
>   
> -	if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && fault.is_tdp)
> -		r = kvm_tdp_page_fault(vcpu, &fault);
> -	else
> -		r = vcpu->arch.mmu->page_fault(vcpu, &fault);
> -
> -	if (r == RET_PF_EMULATE && fault.is_private) {
> -		kvm_mmu_prepare_memory_fault_exit(vcpu, &fault);
> -		return -EFAULT;
> -	}
> -
> -	if (fault.write_fault_to_shadow_pgtable && emulation_type)
> -		*emulation_type |= EMULTYPE_WRITE_PF_TO_SP;
> +	r = __kvm_mmu_do_page_fault(vcpu, cr2_or_gpa, err, prefetch, emulation_type);
>   
>   	/*
>   	 * Similar to above, prefetch faults aren't truly spurious, and the
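
That said, the "always set emulation_type" part of the change is visible
in the hunks above: the private-memory -EFAULT case used to return
early, and now falls through to the emulation_type update. Side by side
(copied from the diff):

	/* Before: the early return skipped the emulation_type update. */
	if (r == RET_PF_EMULATE && fault.is_private) {
		kvm_mmu_prepare_memory_fault_exit(vcpu, &fault);
		return -EFAULT;
	}
	if (fault.write_fault_to_shadow_pgtable && emulation_type)
		*emulation_type |= EMULTYPE_WRITE_PF_TO_SP;

	/* After: -EFAULT falls through, so *emulation_type can still be
	 * updated before __kvm_mmu_do_page_fault() returns.
	 */
	if (r == RET_PF_EMULATE && fault.is_private) {
		kvm_mmu_prepare_memory_fault_exit(vcpu, &fault);
		r = -EFAULT;
	}
	if (fault.write_fault_to_shadow_pgtable && emulation_type)
		*emulation_type |= EMULTYPE_WRITE_PF_TO_SP;

	return r;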

