Date:   Tue, 06 Apr 2021 13:12:13 +0300
From:   Maxim Levitsky <mlevitsk@...hat.com>
To:     Sean Christopherson <seanjc@...gle.com>
Cc:     kvm@...r.kernel.org,
        "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
        Jim Mattson <jmattson@...gle.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Joerg Roedel <joro@...tes.org>,
        "H. Peter Anvin" <hpa@...or.com>,
        "open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)" 
        <linux-kernel@...r.kernel.org>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Jonathan Corbet <corbet@....net>,
        Borislav Petkov <bp@...en8.de>, Ingo Molnar <mingo@...hat.com>,
        "open list:DOCUMENTATION" <linux-doc@...r.kernel.org>
Subject: Re: [PATCH 5/6] KVM: nSVM: avoid loading PDPTRs after migration
 when possible

On Mon, 2021-04-05 at 17:01 +0000, Sean Christopherson wrote:
> On Thu, Apr 01, 2021, Maxim Levitsky wrote:
> > if new KVM_*_SREGS2 ioctls are used, the PDPTRs are
> > part of the migration state and thus are loaded
> > by those ioctls.
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
> > ---
> >  arch/x86/kvm/svm/nested.c | 15 +++++++++++++--
> >  1 file changed, 13 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> > index ac5e3e17bda4..b94916548cfa 100644
> > --- a/arch/x86/kvm/svm/nested.c
> > +++ b/arch/x86/kvm/svm/nested.c
> > @@ -373,10 +373,9 @@ static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
> >  		return -EINVAL;
> >  
> >  	if (!nested_npt && is_pae_paging(vcpu) &&
> > -	    (cr3 != kvm_read_cr3(vcpu) || pdptrs_changed(vcpu))) {
> > +	    (cr3 != kvm_read_cr3(vcpu) || !kvm_register_is_available(vcpu, VCPU_EXREG_PDPTR)))
> >  		if (CC(!load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3)))
> 
> What if we ditch the optimizations[*] altogether and just do:
> 
> 	if (!nested_npt && is_pae_paging(vcpu) &&
> 	    CC(!load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3))
> 		return -EINVAL;
> 
> Won't that obviate the need for KVM_{GET|SET}_SREGS2 since KVM will always load
> the PDPTRs from memory?  IMO, nested migration with shadowing paging doesn't
> warrant this level of optimization complexity.

It's not an optimization; it was done to stay 100% within the x86 spec.
The PDPTRs are internal CPU registers which are loaded only when the
guest writes CR3/CR0/CR4, when guest entry loads CR3, or when guest
exit loads CR3 (I checked both the Intel and AMD manuals).
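
For illustration only, here is a minimal sketch of that architectural
behaviour (this is not KVM code; the names and the reserved-bit mask
are made up, and the real mask depends on MAXPHYADDR etc.):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PDPTE_PRESENT   (1ULL << 0)
/* Illustrative reserved-bit mask only. */
#define PDPTE_RSVD_MASK 0x1E6ULL

struct vcpu_pdptrs {
	uint64_t pdptrs[4];	/* the cached copy the CPU actually consults */
};

/*
 * Invoked only on the events listed above (mov to CR3/CR0/CR4, or an
 * entry/exit that loads CR3).  'guest_pdpt' stands in for the 32-byte
 * aligned PDPT that CR3 points at in guest memory.
 */
bool load_pdptrs_on_cr_write(struct vcpu_pdptrs *v, const uint64_t guest_pdpt[4])
{
	uint64_t tmp[4];
	int i;

	for (i = 0; i < 4; i++) {
		uint64_t pdpte = guest_pdpt[i];

		/* A present PDPTE with reserved bits set makes the load fail. */
		if ((pdpte & PDPTE_PRESENT) && (pdpte & PDPTE_RSVD_MASK))
			return false;
		tmp[i] = pdpte;
	}

	/*
	 * Snapshot taken: later edits to the in-memory PDPT are not observed
	 * until the next CR3/CR0/CR4 write (or VM entry/exit) reloads them.
	 */
	memcpy(v->pdptrs, tmp, sizeof(tmp));
	return true;
}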

In addition, when NPT is enabled, AMD drops this silliness and just
treats the PDPTRs as normal paging entries, while on the Intel side,
when EPT is enabled, the PDPTRs are stored in the VMCS.

Nested migration is neither of these cases, thus the PDPTRs should be
stored out of band. The same applies to non-nested migration.
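
Roughly, the userspace side of that out-of-band path would look like
the sketch below (assuming headers that already carry the proposed
KVM_GET_SREGS2/KVM_SET_SREGS2 definitions; the exact struct kvm_sregs2
layout is whatever the series ends up defining):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* On the source: capture the sregs together with the cached PDPTRs. */
int save_vcpu_sregs2(int vcpu_fd, struct kvm_sregs2 *out)
{
	return ioctl(vcpu_fd, KVM_GET_SREGS2, out);
}

/*
 * On the destination: restore them verbatim, so KVM does not have to
 * re-derive the PDPTRs from the (possibly already modified) guest page
 * tables.
 */
int restore_vcpu_sregs2(int vcpu_fd, struct kvm_sregs2 *in)
{
	return ioctl(vcpu_fd, KVM_SET_SREGS2, in);
}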

This was requested by Jim Mattson, and I went ahead and implemented it,
even though I do understand that no sane OS relies on the PDPTRs being
out of sync with the actual page table that contains them.

Best regards,
	Maxim Levitsky


> 
> [*] For some definitions of "optimization", since the extra pdptrs_changed()
>     check in the existing code is likely a net negative.
> 
> >  			return -EINVAL;
> > -	}
> >  
> >  	/*
> >  	 * TODO: optimize unconditional TLB flush/MMU sync here and in

