Date:	Thu, 18 Jul 2013 08:31:54 +0300
From:	Gleb Natapov <gleb@...hat.com>
To:	Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
Cc:	markus@...ppelsdorf.de, mtosatti@...hat.com, pbonzini@...hat.com,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH] KVM: MMU: avoid fast page fault fixing mmio page fault

On Thu, Jul 18, 2013 at 12:52:37PM +0800, Xiao Guangrong wrote:
> Currently, fast page fault tries to fix the mmio page fault when the
> generation number is invalid (spte.gen != kvm.gen) and returns to the
> guest to retry the fault, since it sees the last spte as nonpresent.
> This causes an infinite loop.
> 
> It can be triggered only on AMD hosts, since on Intel hosts the mmio
> page fault is recognized as an ept-misconfig.
> 
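If I remember the code correctly, the loop comes from fast_page_fault()
treating the stale mmio spte as "mapping changed, just retry"; roughly
(paraphrased from memory, not the exact source):

	walk_shadow_page_lockless_begin(vcpu);
	for_each_shadow_entry_lockless(vcpu, gva, iterator, spte)
		if (!is_shadow_present_pte(spte) || iterator.level < level)
			break;

	/*
	 * The mmio spte is not a present/rmap spte, so we report the
	 * fault as handled and return to the guest, which immediately
	 * faults on the same address again - nothing on this path ever
	 * refreshes the stale generation number.
	 */
	if (!is_rmap_spte(spte)) {
		ret = true;
		goto exit;
	}
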
We still call into the regular page fault handler from the ept-misconfig
handler, but the fake zero error_code we provide there makes
page_fault_can_be_fast() return false.
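
For reference, the current check is roughly (again from memory, the body
of page_fault_can_be_fast() before this patch):

	if (!(error_code & PFERR_PRESENT_MASK) ||
	    !(error_code & PFERR_WRITE_MASK))
		return false;

	return true;

So with the zero error_code coming from handle_ept_misconfig() we never
reach the spte walk, which is why Intel with ept is not affected.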

Shouldn't shadow paging trigger this too? I haven't encountered this on
Intel without ept.

> Fix it by filtering mmio page faults out in page_fault_can_be_fast().
> 
> Reported-by: Markus Trippelsdorf <markus@...ppelsdorf.de>
> Tested-by: Markus Trippelsdorf <markus@...ppelsdorf.de>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
> ---
>  arch/x86/kvm/mmu.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index bf7af1e..3a9493a 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2811,6 +2811,13 @@ exit:
>  static bool page_fault_can_be_fast(struct kvm_vcpu *vcpu, u32 error_code)
>  {
>  	/*
> +	 * Do not fix an mmio spte with an invalid generation number; it
> +	 * needs to be updated by the slow page fault path.
> +	 */
> +	if (unlikely(error_code & PFERR_RSVD_MASK))
> +		return false;
> +
> +	/*
>  	 * #PF can be fast only if the shadow page table is present and it
>  	 * is caused by write-protect, that means we just need change the
>  	 * W bit of the spte which can be done out of mmu-lock.
> -- 
> 1.8.1.4

--
			Gleb.