Message-ID: <1e5284fd-c426-649a-becb-dcff3e2afe57@amd.com>
Date: Thu, 7 Jan 2021 09:47:52 -0600
From: Tom Lendacky <thomas.lendacky@....com>
To: Sean Christopherson <seanjc@...gle.com>,
Michael Roth <michael.roth@....com>
Cc: kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
Andy Lutomirski <luto@...capital.net>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H . Peter Anvin" <hpa@...or.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/3] KVM: SVM: use vmsave/vmload for saving/restoring
additional host state
On 1/7/21 9:32 AM, Tom Lendacky wrote:
> On 1/5/21 11:20 AM, Sean Christopherson wrote:
>> On Tue, Jan 05, 2021, Michael Roth wrote:
>>> @@ -3703,16 +3688,9 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu,
>>> if (sev_es_guest(svm->vcpu.kvm)) {
>>> __svm_sev_es_vcpu_run(svm->vmcb_pa);
>>> } else {
>>> - __svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs);
>>> -
>>> -#ifdef CONFIG_X86_64
>>> - native_wrmsrl(MSR_GS_BASE, svm->host.gs_base);
>>> -#else
>>> - loadsegment(fs, svm->host.fs);
>>> -#ifndef CONFIG_X86_32_LAZY_GS
>>> - loadsegment(gs, svm->host.gs);
>>> -#endif
>>> -#endif
>>> + __svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs,
>>> + page_to_phys(per_cpu(svm_data, vcpu->cpu)->save_area));
>>
>> Does this need to use __sme_page_pa()?
>
> Yes, it should now. The SEV-ES support added the SME encryption bit to the
> MSR_VM_HSAVE_PA MSR, so we need to be consistent in how the data is read
> and written.
Oh, and also in svm_vcpu_load().
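
For reference, an untested sketch of how both spots could stay consistent
with the existing MSR_VM_HSAVE_PA handling (this assumes the __sme_page_pa()
helper that svm.c already uses; illustrative only, not the actual patch):

  /* The hardware-enable path already tags the save area PA with the SME bit: */
  wrmsrl(MSR_VM_HSAVE_PA, __sme_page_pa(sd->save_area));

  /* So the vmenter call site above would pass the same form of the address: */
  __svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs,
                 __sme_page_pa(per_cpu(svm_data, vcpu->cpu)->save_area));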
Thanks,
Tom
>
> Thanks,
> Tom
>
>>
>>> }
>>> /*
>>
>> ...
>>
>>> diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
>>> index 6feb8c08f45a..89f4e8e7bf0e 100644
>>> --- a/arch/x86/kvm/svm/vmenter.S
>>> +++ b/arch/x86/kvm/svm/vmenter.S
>>> @@ -33,6 +33,7 @@
>>> * __svm_vcpu_run - Run a vCPU via a transition to SVM guest mode
>>> * @vmcb_pa: unsigned long
>>> * @regs: unsigned long * (to guest registers)
>>> + * @hostsa_pa: unsigned long
>>> */
>>> SYM_FUNC_START(__svm_vcpu_run)
>>> push %_ASM_BP
>>> @@ -47,6 +48,9 @@ SYM_FUNC_START(__svm_vcpu_run)
>>> #endif
>>> push %_ASM_BX
>>> + /* Save @hostsa_pa */
>>> + push %_ASM_ARG3
>>> +
>>> /* Save @regs. */
>>> push %_ASM_ARG2
>>> @@ -154,6 +158,12 @@ SYM_FUNC_START(__svm_vcpu_run)
>>> xor %r15d, %r15d
>>> #endif
>>> + /* "POP" @hostsa_pa to RAX. */
>>> + pop %_ASM_AX
>>> +
>>> + /* Restore host user state and FS/GS base */
>>> + vmload %_ASM_AX
>>
>> This VMLOAD needs the "handle fault on reboot" goo. Seeing the code, I think
>> I'd prefer to handle this in C code, especially if Paolo takes the svm_ops.h
>> patch[*]. Actually, I think with that patch it'd make sense to move the
>> existing VMSAVE+VMLOAD for the guest into svm.c, too. And completely
>> unrelated, the fault handling in svm/vmenter.S can be cleaned up a smidge to
>> eliminate the JMPs.
>>
>> Paolo, what do you think about me folding these patches into my series to do
>> the above cleanups? And maybe sending a pull request for the end result?
>> (I'd also like to add on a patch to use the user return MSR mechanism for
>> MSR_TSC_AUX).
>>
>> [*] https://lkml.kernel.org/r/20201231002702.2223707-8-seanjc@google.com
>>
>>
>>> +
>>> pop %_ASM_BX
>>> #ifdef CONFIG_X86_64
>>> --
>>> 2.25.1
>>>
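
For completeness, a rough sketch of the C-side approach Sean describes above,
modeled on the referenced svm_ops.h patch (names and placement are
illustrative, not the final code):

  /* Needs _ASM_EXTABLE from <asm/asm.h>; kvm_spurious_fault() is the
   * existing "handle fault on reboot" landing pad. */
  static __always_inline void vmload(unsigned long pa)
  {
          asm_volatile_goto("1: vmload %0\n\t"
                            _ASM_EXTABLE(1b, %l[fault])
                            : : "a" (pa) : "memory" : fault);
          return;
  fault:
          kvm_spurious_fault();
  }

  /* ...and in svm_vcpu_enter_exit(), roughly: */
  __svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs);
  vmload(__sme_page_pa(per_cpu(svm_data, vcpu->cpu)->save_area));

With something along those lines, a VMLOAD fault during emergency reboot
(EFER.SVME already cleared) lands in kvm_spurious_fault() instead of being
unhandled, and the extra @hostsa_pa argument to __svm_vcpu_run() goes away.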