Date:   Tue, 8 Aug 2017 12:06:00 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Brijesh Singh <brijesh.singh@....com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Cc:     Radim Krčmář <rkrcmar@...hat.com>,
        Thomas Lendacky <thomas.lendacky@....com>
Subject: Re: [PATCH] KVM: SVM: Limit PFERR_NESTED_GUEST_PAGE error_code check to L1 guest

On 07/08/2017 21:11, Brijesh Singh wrote:
> Commit 1472775 ("kvm: svm: Add support for additional SVM NPF error
> codes") added a new error code to aid nested page fault handling. The
> commit unprotects the page (via kvm_mmu_unprotect_page) when we get an
> NPF due to a guest page table walk where the page was marked RO.
> 
> Paolo highlighted a use case where an L0->L2 shadow nested page table
> is marked read-only, in particular when a page is read-only in L1's
> nested page table. If such a page is accessed by L2 while walking page
> tables, it can cause a nested page fault (page table walks are write
> accesses). However, after kvm_mmu_unprotect_page we may get another
> page fault, and another, in an endless stream.
> 
> To cover this use case, we qualify the new error_code check with
> vcpu->arch.mmu.direct_map so that the check runs only for the L1 guest
> and not for the L2 guest. This restricts the unprotect-and-retry path
> and avoids the endless-fault loop described above.
> 
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: "Radim Krčmář" <rkrcmar@...hat.com>
> Cc: Thomas Lendacky <thomas.lendacky@....com>
> Signed-off-by: Brijesh Singh <brijesh.singh@....com>
> ---
> 
> See http://marc.info/?l=kvm&m=150153155519373&w=2 for a detailed
> discussion of the use case and code flow.
> 
>  arch/x86/kvm/mmu.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 9b1dd11..4aaa4aa 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4839,7 +4839,8 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
>  	 * Note: AMD only (since it supports the PFERR_GUEST_PAGE_MASK used
> 	 *       in PFERR_NESTED_GUEST_PAGE)
>  	 */
> -	if (error_code == PFERR_NESTED_GUEST_PAGE) {
> +	if (vcpu->arch.mmu.direct_map &&
> +	    (error_code == PFERR_NESTED_GUEST_PAGE)) {
>  		kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2));
>  		return 1;
>  	}
> 
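For context, here is a minimal standalone sketch of the check this patch
guards. The PFERR_* bit positions and the PFERR_NESTED_GUEST_PAGE
composition below are written from arch/x86/kvm/mmu.c as of this patch,
but treat the exact values as an assumption; fake_vcpu and
should_unprotect are simplified stand-ins for illustration, not KVM's
real structures or API:

/*
 * Standalone illustration (not kernel code) of the guarded check.
 * PFERR_* values mirror arch/x86/kvm/mmu.c at the time of this patch;
 * treat the exact bit assignments as an assumption.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PFERR_PRESENT_MASK      (1ULL << 0)
#define PFERR_USER_MASK         (1ULL << 2)
#define PFERR_GUEST_FINAL_MASK  (1ULL << 32)
#define PFERR_GUEST_PAGE_MASK   (1ULL << 33)

/* NPF taken while the hardware walked the guest's own page tables. */
#define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK |  \
                                 PFERR_USER_MASK |        \
                                 PFERR_GUEST_FINAL_MASK | \
                                 PFERR_PRESENT_MASK)

/* Simplified stand-in for vcpu->arch.mmu.direct_map. */
struct fake_vcpu {
	bool direct_map;  /* true: TDP for L1; false: shadow paging for L2 */
};

/*
 * After the patch, the unprotect-and-retry shortcut fires only when the
 * MMU is a direct map, i.e. for L1 faults. For L2 (shadow) faults the
 * handler falls through to emulation instead, so unprotecting an
 * L0->L2 shadow page can no longer produce an endless fault stream.
 */
static bool should_unprotect(const struct fake_vcpu *vcpu, uint64_t error_code)
{
	return vcpu->direct_map && error_code == PFERR_NESTED_GUEST_PAGE;
}

int main(void)
{
	struct fake_vcpu l1 = { .direct_map = true };
	struct fake_vcpu l2 = { .direct_map = false };

	printf("L1 fault unprotects: %d\n",
	       should_unprotect(&l1, PFERR_NESTED_GUEST_PAGE));  /* 1 */
	printf("L2 fault unprotects: %d\n",
	       should_unprotect(&l2, PFERR_NESTED_GUEST_PAGE));  /* 0 */
	return 0;
}

Compiled with gcc, this prints 1 for the L1 case and 0 for the L2 case,
matching the split the patch introduces.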


Thanks, queued for 4.14.

Paolo
