Message-ID: <YTkzUaFD664+9WB+@google.com>
Date: Wed, 8 Sep 2021 22:04:01 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Maxim Levitsky <mlevitsk@...hat.com>
Cc: kvm@...r.kernel.org, Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Jim Mattson <jmattson@...gle.com>,
"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
"open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)"
<linux-kernel@...r.kernel.org>, Wanpeng Li <wanpengli@...cent.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH v2 1/3] KVM: nSVM: restore the L1 host state prior to
resuming a nested guest on SMM exit
On Mon, Aug 23, 2021, Maxim Levitsky wrote:
> If the guest is entered prior to restoring the host save area,
> the guest entry code might see incorrect L1 state (e.g. paging state).
>
> Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
> ---
> arch/x86/kvm/svm/svm.c | 23 +++++++++++++----------
> 1 file changed, 13 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 1a70e11f0487..ea7a4dacd42f 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4347,27 +4347,30 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
> gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
> return 1;
>
> - if (svm_allocate_nested(svm))
> + if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
> + &map_save) == -EINVAL)
> return 1;
Returning here will neglect to unmap "map".
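E.g. something along these lines (completely untested) would unmap "map" before
bailing:

	if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
			 &map_save) == -EINVAL) {
		kvm_vcpu_unmap(vcpu, &map, true);
		return 1;
	}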
>
> - vmcb12 = map.hva;
> -
> - nested_load_control_from_vmcb12(svm, &vmcb12->control);
> -
> - ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
> - kvm_vcpu_unmap(vcpu, &map, true);
> + if (svm_allocate_nested(svm))
> + return 1;
Ditto here for both "map" and "map_save", though it looks like there's a
pre-existing bug if svm_allocate_nested() fails.  If you add a prep cleanup
patch (ordered between the bug fix and this patch) that removes the statement
nesting, it will make handling this a lot easier, e.g.
static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
{
	struct vcpu_svm *svm = to_svm(vcpu);
	struct kvm_host_map map, map_save;
	u64 saved_efer, vmcb12_gpa;
	struct vmcb *vmcb12;
	int ret;

	if (!guest_cpuid_has(vcpu, X86_FEATURE_LM))
		return 0;

	/* Non-zero if SMI arrived while vCPU was in guest mode. */
	if (!GET_SMSTATE(u64, smstate, 0x7ed8))
		return 0;

	if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
		return 1;

	saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0);
	if (!(saved_efer & EFER_SVME))
		return 1;

	vmcb12_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
		return 1;

	ret = 1;
	if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr), &map_save) == -EINVAL)
		goto unmap_map;

	if (svm_allocate_nested(svm))
		goto unmap_save;

	/*
	 * Restore L1 host state from L1 HSAVE area as VMCB01 was
	 * used during SMM (see svm_enter_smm()).
	 */
	svm_copy_vmrun_state(&svm->vmcb01.ptr->save, map_save.hva + 0x400);

	/* Restore L2 state. */
	vmcb12 = map.hva;
	nested_load_control_from_vmcb12(svm, &vmcb12->control);
	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);

unmap_save:
	kvm_vcpu_unmap(vcpu, &map_save, true);
unmap_map:
	kvm_vcpu_unmap(vcpu, &map, true);

	return ret;
}
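That way every failure after the first kvm_vcpu_map() unwinds through the unmap
labels, so neither mapping can be leaked regardless of which step fails.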