Message-ID: <e84d46e860bf2575564bb144eeb57230405ad36e.camel@redhat.com>
Date: Thu, 21 Jul 2022 14:54:15 +0300
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: kvm@...r.kernel.org, x86@...nel.org,
Kees Cook <keescook@...omium.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
linux-kernel@...r.kernel.org, "H. Peter Anvin" <hpa@...or.com>,
Borislav Petkov <bp@...en8.de>, Joerg Roedel <joro@...tes.org>,
Ingo Molnar <mingo@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>
Subject: Re: [PATCH v2 09/11] KVM: x86: emulator/smm: use smram struct for
64 bit smram load/restore

On Thu, 2022-07-21 at 00:38 +0000, Sean Christopherson wrote:
> On Tue, Jun 21, 2022, Maxim Levitsky wrote:
> > Use the kvm_smram_state_64 struct to save/restore the 64-bit SMM state
> > (used when X86_FEATURE_LM is present in the guest CPUID, regardless of
> > whether the guest itself runs in 32-bit mode).
> >
> > Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
> > ---
> > @@ -9814,7 +9805,7 @@ static void enter_smm(struct kvm_vcpu *vcpu)
> > memset(buf, 0, 512);
> > #ifdef CONFIG_X86_64
> > if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
> > - enter_smm_save_state_64(vcpu, buf);
> > + enter_smm_save_state_64(vcpu, (struct kvm_smram_state_64 *)buf);
> > else
> > #endif
> > enter_smm_save_state_32(vcpu, (struct kvm_smram_state_32 *)buf);
>
> Hrm, I _love_ the approach overall, but I really dislike having to cast an
> arbitrary buffer, especially in the SVM code.
>
> Aha! Rather than keeping a buffer and casting, create a union to hold everything:
>
> union kvm_smram {
> struct kvm_smram_state_64 smram64;
> struct kvm_smram_state_32 smram32;
> u8 bytes[512];
> };
Great idea, will do in v3.
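
Concretely, I plan to also switch the kvm_x86_ops callbacks from the raw
'char *' buffer to the union, so the casts disappear from the vendor code
as well. A rough sketch of what I have in mind for v3 (type names as used
in this series):

	int (*enter_smm)(struct kvm_vcpu *vcpu, union kvm_smram *smram);
	int (*leave_smm)(struct kvm_vcpu *vcpu, const union kvm_smram *smram);
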
>
> and then enter_smm() becomes:
>
> static void enter_smm(struct kvm_vcpu *vcpu)
> {
> struct kvm_segment cs, ds;
> struct desc_ptr dt;
> unsigned long cr0;
>
> union kvm_smram smram;
>
> BUILD_BUG_ON(sizeof(smram) != 512);
>
> memset(smram.bytes, 0, sizeof(smram));
> #ifdef CONFIG_X86_64
> if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
> enter_smm_save_state_64(vcpu, &smram.smram64);
> else
> #endif
> enter_smm_save_state_32(vcpu, &smram.smram32);
>
> /*
> * Give enter_smm() a chance to make ISA-specific changes to the vCPU
> * state (e.g. leave guest mode) after we've saved the state into the
> * SMM state-save area.
> */
> static_call(kvm_x86_enter_smm)(vcpu, &smram);
>
> kvm_smm_changed(vcpu, true);
> kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, smram.bytes, sizeof(smram));
>
> and em_rsm() gets similar treatment. Then the vendor code doesn't have to cast,
> e.g. SVM can do:
>
> smram->smram64.svm_guest_flag = 1;
> smram->smram64.svm_guest_vmcb_gpa = svm->nested.vmcb12_gpa;
>
> That way we don't have to refactor this all again if we want to use SMRAM to save
> something on Intel for VMX (though I agree with Jim that that's probably a bad idea).
>
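With that, the SVM enter path would end up looking roughly like below
(just a sketch: only the two stores into the save area change, the rest of
svm_enter_smm() stays as it is today; svm_guest_flag and svm_guest_vmcb_gpa
are the field names this series uses):

static int svm_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
{
	struct vcpu_svm *svm = to_svm(vcpu);

	if (!is_guest_mode(vcpu))
		return 0;

	/*
	 * Typed stores into the 64-bit save area replace the old
	 * put_smstate(u64, buf, offset, ...) calls on a raw buffer.
	 */
	smram->smram64.svm_guest_flag = 1;
	smram->smram64.svm_guest_vmcb_gpa = svm->nested.vmcb12_gpa;

	/* ... the rest of the function is unchanged ... */
}
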
Best regards,
Maxim Levitsky