Message-ID: <faedeeb06b63a115a1ab733b1226ae6822d2a907.camel@redhat.com>
Date: Thu, 20 Aug 2020 13:02:15 +0300
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org
Cc: Jim Mattson <jmattson@...gle.com>, Joerg Roedel <joro@...tes.org>,
Borislav Petkov <bp@...en8.de>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
"open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)"
<linux-kernel@...r.kernel.org>,
"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Sean Christopherson <sean.j.christopherson@...el.com>
Subject: Re: [PATCH 5/8] KVM: nSVM: implement ondemand allocation of the
nested state
On Thu, 2020-08-20 at 11:58 +0200, Paolo Bonzini wrote:
> On 20/08/20 11:13, Maxim Levitsky wrote:
> > @@ -3912,6 +3914,14 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
> > vmcb_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
> >
> > if (guest) {
> > + /*
> > + * This can happen if SVM was not enabled prior to #SMI,
> > +	 * but the guest corrupted the SMM state save area and
> > +	 * marked SVM as enabled there.
> > + */
> > + if (!svm->nested.initialized)
> > + return 1;
> > +
> > if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map) == -EINVAL)
> > return 1;
>
> This can also happen if you live migrate while in SMM (EFER.SVME=0).
> You need to check for the SVME bit in the SMM state save area, and:
>
> 1) triple fault if it is clear
>
> 2) call svm_allocate_nested if it is set.
>
> Paolo
>
Makes sense, will do.
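
Something along these lines, perhaps (rough, untested sketch; I am
assuming the saved EFER sits at SMRAM offset 0x7ed0, the same offset
rsm_load_state_64 uses, and that returning 1 from here is still the
right way to fail):

static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
{
	struct vcpu_svm *svm = to_svm(vcpu);
	struct vmcb *nested_vmcb;
	struct kvm_host_map map;
	u64 saved_efer, guest, vmcb_gpa;

	saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0);
	guest      = GET_SMSTATE(u64, smstate, 0x7ed8);
	vmcb_gpa   = GET_SMSTATE(u64, smstate, 0x7ee0);

	if (guest) {
		/*
		 * We can only have been in guest mode at #SMI time if SVM
		 * was enabled; if the save area says otherwise, the SMM
		 * state is corrupted (or stale after migration).
		 */
		if (!(saved_efer & EFER_SVME))
			return 1;

		/* The nested state may not exist yet, allocate it now. */
		if (svm_allocate_nested(svm))
			return 1;

		if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map) == -EINVAL)
			return 1;

		nested_vmcb = map.hva;
		enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, &map);
	}

	return 0;
}

(I still need to check how a pre_leave_smm failure is reported on the
caller side, so that the EFER.SVME=0 case really ends up as a triple
fault rather than a plain emulation failure.)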
Best regards,
Maxim Levitsky