Date:   Thu, 7 Jan 2021 18:32:11 -0600
From:   Michael Roth <michael.roth@....com>
To:     Sean Christopherson <seanjc@...gle.com>
Cc:     kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
        Andy Lutomirski <luto@...capital.net>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        x86@...nel.org, "H . Peter Anvin" <hpa@...or.com>,
        linux-kernel@...r.kernel.org,
        Tom Lendacky <thomas.lendacky@....com>
Subject: Re: [PATCH v3 1/3] KVM: SVM: use vmsave/vmload for saving/restoring
 additional host state

On Tue, Jan 05, 2021 at 09:20:03AM -0800, Sean Christopherson wrote:
> On Tue, Jan 05, 2021, Michael Roth wrote:
> > @@ -3703,16 +3688,9 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu,
> >  	if (sev_es_guest(svm->vcpu.kvm)) {
> >  		__svm_sev_es_vcpu_run(svm->vmcb_pa);
> >  	} else {
> > -		__svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs);
> > -
> > -#ifdef CONFIG_X86_64
> > -		native_wrmsrl(MSR_GS_BASE, svm->host.gs_base);
> > -#else
> > -		loadsegment(fs, svm->host.fs);
> > -#ifndef CONFIG_X86_32_LAZY_GS
> > -		loadsegment(gs, svm->host.gs);
> > -#endif
> > -#endif
> > +		__svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs,
> > +			       page_to_phys(per_cpu(svm_data,
> > +						    vcpu->cpu)->save_area));
> 
> Does this need to use __sme_page_pa()?

Oddly enough, the current patch seems to work even with SME enabled. Not
sure why, though, since as Tom pointed out we do set the SME bit on this
address elsewhere. But we should be setting it here either way.

> 
> >  	}
> >  
> >  	/*
> 
> ...
> 
> > diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
> > index 6feb8c08f45a..89f4e8e7bf0e 100644
> > --- a/arch/x86/kvm/svm/vmenter.S
> > +++ b/arch/x86/kvm/svm/vmenter.S
> > @@ -33,6 +33,7 @@
> >   * __svm_vcpu_run - Run a vCPU via a transition to SVM guest mode
> >   * @vmcb_pa:	unsigned long
> >   * @regs:	unsigned long * (to guest registers)
> > + * @hostsa_pa:	unsigned long
> >   */
> >  SYM_FUNC_START(__svm_vcpu_run)
> >  	push %_ASM_BP
> > @@ -47,6 +48,9 @@ SYM_FUNC_START(__svm_vcpu_run)
> >  #endif
> >  	push %_ASM_BX
> >  
> > +	/* Save @hostsa_pa */
> > +	push %_ASM_ARG3
> > +
> >  	/* Save @regs. */
> >  	push %_ASM_ARG2
> >  
> > @@ -154,6 +158,12 @@ SYM_FUNC_START(__svm_vcpu_run)
> >  	xor %r15d, %r15d
> >  #endif
> >  
> > +	/* "POP" @hostsa_pa to RAX. */
> > +	pop %_ASM_AX
> > +
> > +	/* Restore host user state and FS/GS base */
> > +	vmload %_ASM_AX
> 
> This VMLOAD needs the "handle fault on reboot" goo.  Seeing the code, I think

Ah, yes, I overlooked that with the rework.
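For v4 I'd presumably wrap the VMLOAD in an exception fixup along the lines
of the vmsave/vmload wrappers in your svm_ops.h patch, i.e. roughly (a sketch
only; the asm goto plumbing and names are lifted from that series, untested
here):

```c
/* Sketch of a fault-tolerant vmload wrapper, per the svm_ops.h
 * approach: if the instruction faults during reboot/shutdown, the
 * extable entry redirects to the fault label rather than oopsing. */
static __always_inline void vmload(unsigned long pa)
{
	asm_volatile_goto("1: vmload %0\n\t"
			  _ASM_EXTABLE(1b, %l[fault])
			  : : "a" (pa) : "memory" : fault);
	return;
fault:
	kvm_spurious_fault();
}
```

That would also let svm_vcpu_enter_exit() issue the host-state VMLOAD from C
after __svm_vcpu_run() returns, instead of from vmenter.S.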

> I'd prefer to handle this in C code, especially if Paolo takes the svm_ops.h
> patch[*].  Actually, I think with that patch it'd make sense to move the
> existing VMSAVE+VMLOAD for the guest into svm.c, too.  And completely unrelated,
> the fault handling in svm/vmenter.S can be cleaned up a smidge to eliminate the
> JMPs.
> 
> Paolo, what do you think about me folding these patches into my series to do the
> above cleanups?  And maybe sending a pull request for the end result?  (I'd also
> like to add on a patch to use the user return MSR mechanism for MSR_TSC_AUX).

No complaints on my end at least :) But I'm happy to send a v4 with the SME
bit fix and reboot handling if you think that's worthwhile (and the other
suggested changes as well, though I'm not sure exactly what you have in mind
there). I can also help with any testing, of course.

Thanks,

Mike

> 
> [*] https://lkml.kernel.org/r/20201231002702.2223707-8-seanjc@google.com
> 
> > +
> >  	pop %_ASM_BX
> >  
> >  #ifdef CONFIG_X86_64
> > -- 
> > 2.25.1
> > 
