Message-ID: <20131112002559.GA8251@amt.cnet>
Date:	Mon, 11 Nov 2013 22:25:59 -0200
From:	Marcelo Tosatti <mtosatti@...hat.com>
To:	Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
Cc:	gleb@...hat.com, avi.kivity@...il.com, pbonzini@...hat.com,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v3 01/15] KVM: MMU: properly check last spte in
 fast_page_fault()

On Wed, Oct 23, 2013 at 09:29:19PM +0800, Xiao Guangrong wrote:
> Use sp->role.level instead of @level, since @level is not read from the
> page table hierarchy.
> 
> There is no issue in the current code, since fast page fault currently
> only fixes faults caused by dirty logging, which is always on the last
> level (level = 1).
> 
> This patch makes the code more readable and avoids potential issues in
> future development.
> 
> Signed-off-by: Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
> ---
>  arch/x86/kvm/mmu.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 40772ef..d2aacc2 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2798,9 +2798,9 @@ static bool page_fault_can_be_fast(u32 error_code)
>  }
>  
>  static bool
> -fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 spte)
> +fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
> +			u64 *sptep, u64 spte)
>  {
> -	struct kvm_mmu_page *sp = page_header(__pa(sptep));
>  	gfn_t gfn;
>  
>  	WARN_ON(!sp->role.direct);
> @@ -2826,6 +2826,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
>  			    u32 error_code)
>  {
>  	struct kvm_shadow_walk_iterator iterator;
> +	struct kvm_mmu_page *sp;
>  	bool ret = false;
>  	u64 spte = 0ull;
>  
> @@ -2846,7 +2847,8 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
>  		goto exit;
>  	}
>  
> -	if (!is_last_spte(spte, level))
> +	sp = page_header(__pa(iterator.sptep));
> +	if (!is_last_spte(spte, sp->role.level))
>  		goto exit;
>  
>  	/*
> @@ -2872,7 +2874,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
>  	 * the gfn is not stable for indirect shadow page.
>  	 * See Documentation/virtual/kvm/locking.txt to get more detail.
>  	 */
> -	ret = fast_pf_fix_direct_spte(vcpu, iterator.sptep, spte);
> +	ret = fast_pf_fix_direct_spte(vcpu, sp, iterator.sptep, spte);
>  exit:
>  	trace_fast_page_fault(vcpu, gva, error_code, iterator.sptep,
>  			      spte, ret);
> -- 
> 1.8.1.4


Reviewed-by: Marcelo Tosatti <mtosatti@...hat.com>
