Date:   Wed, 7 Sep 2022 10:58:39 -0700
From:   David Matlack <dmatlack@...gle.com>
To:     Hou Wenlong <houwenlong.hwl@...group.com>
Cc:     kvm@...r.kernel.org, Sean Christopherson <seanjc@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
        "H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/6] KVM: x86/mmu: Reduce gfn range of tlb flushing in
 tdp_mmu_map_handle_target_level()

On Wed, Aug 24, 2022 at 05:29:20PM +0800, Hou Wenlong wrote:
> Since the child SP is zapped, the gfn range of the tlb flush should be
> the range covered by the child SP, not the parent SP. Replace sp->gfn,
> which is the base gfn of the parent SP, with iter->gfn, and use the
> correct gfn range size for the child SP to reduce the tlb flushing range.
> 

Fixes: bb95dfb9e2df ("KVM: x86/mmu: Defer TLB flush to caller when freeing TDP MMU shadow pages")

> Signed-off-by: Hou Wenlong <houwenlong.hwl@...group.com>

Reviewed-by: David Matlack <dmatlack@...gle.com>

> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index bf2ccf9debca..08b7932122ec 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1071,8 +1071,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
>  		return RET_PF_RETRY;
>  	else if (is_shadow_present_pte(iter->old_spte) &&
>  		 !is_last_spte(iter->old_spte, iter->level))
> -		kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
> -						   KVM_PAGES_PER_HPAGE(iter->level + 1));
> +		kvm_flush_remote_tlbs_with_address(vcpu->kvm, iter->gfn,
> +						   KVM_PAGES_PER_HPAGE(iter->level));
>  
>  	/*
>  	 * If the page fault was caused by a write but the page is write
> -- 
> 2.31.1
> 
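
To put a number on the reduction: on x86, KVM_PAGES_PER_HPAGE(level)
works out to 1 << ((level - 1) * 9) 4KiB pages, so a flush of
KVM_PAGES_PER_HPAGE(iter->level) pages starting at iter->gfn covers
exactly the range the zapped child SP mapped, 512 times smaller than
the parent SP's range at iter->level + 1. A standalone sketch of that
arithmetic (plain C, not kernel code; pages_per_hpage() is just an
illustrative stand-in for the macro):

#include <stdio.h>

/* Pages covered by an SPTE at @level, assuming 512-entry page tables. */
static unsigned long pages_per_hpage(int level)
{
	return 1UL << ((level - 1) * 9);
}

int main(void)
{
	/* The flush only happens for non-leaf SPTEs, i.e. level >= 2. */
	for (int level = 2; level <= 4; level++)
		printf("level %d: child SP spans %lu pages, parent SP spans %lu pages\n",
		       level, pages_per_hpage(level), pages_per_hpage(level + 1));
	return 0;
}

E.g. when replacing a non-leaf SPTE at the 2MiB level (level 2), the
flush hint shrinks from 262144 pages to 512.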
