Message-ID: <YwOzxYLMeFuN23W+@google.com>
Date: Mon, 22 Aug 2022 16:50:13 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>
Cc: kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
Anirudh Rayabharam <anrayabh@...ux.microsoft.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Maxim Levitsky <mlevitsk@...hat.com>,
Nathan Chancellor <nathan@...nel.org>,
Michael Kelley <mikelley@...rosoft.com>,
linux-hyperv@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 09/26] KVM: VMX: nVMX: Support TSC scaling and
PERF_GLOBAL_CTRL with enlightened VMCS
On Mon, Aug 22, 2022, Vitaly Kuznetsov wrote:
> Sean Christopherson <seanjc@...gle.com> writes:
> > But that also raises the question of whether or not KVM should honor hyperv_enabled
> > when filtering MSRs. Same question for nested VM-Enter. nested_enlightened_vmentry()
> > will "fail" without an assist page, and the guest can't set the assist page without
> > hyperv_enabled==true, but nothing prevents the host from stuffing the assist page.
>
> The case sounds more like a misbehaving VMM to me. It would probably be
> better to fail nested_enlightened_vmentry() immediately on !hyperv_enabled.
Hmm, sort of.  If KVM explicitly fails nested VM-Enter, then allowing the
guest to read the VMX MSRs with the same buggy setup is odd, e.g. nested VMX is
effectively unsupported at that point since there is nothing the guest can do to
make nested VM-Enter succeed. Extending the "fail VM-Enter" behavior would be to
inject #GP on RDMSR, and at that point KVM is well into "made up architecture"
behavior.
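For reference, the gating being discussed amounts to an early-out at the top of
nested_enlightened_vmentry(), something like this (untested sketch, the rest of
the function elided):

	bool nested_enlightened_vmentry(struct kvm_vcpu *vcpu, u64 *evmcs_gpa)
	{
	+	/*
	+	 * Don't bother reading the VP assist page if the guest can't
	+	 * see Hyper-V in CPUID; a host-stuffed assist page shouldn't
	+	 * get the guest into eVMCS.
	+	 */
	+	if (!vcpu->arch.hyperv_enabled)
	+		return false;
	+
		...
	}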
All in all, I don't think it's worth forcing the issue, even though I do agree that
the VMM is being weird if it's enabling KVM_CAP_HYPERV_ENLIGHTENED_VMCS but not
advertising Hyper-V.
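In ioctl terms, the weird setup boils down to roughly this (userspace sketch,
error handling omitted; the args[0]-is-a-pointer-to-the-version detail is from
memory):

	__u16 evmcs_version;
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_HYPERV_ENLIGHTENED_VMCS,
		.args[0] = (__u64)(uintptr_t)&evmcs_version,
	};

	/* Opt the vCPU in to eVMCS; KVM writes back the supported version. */
	ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap);

	/*
	 * ...but KVM_SET_CPUID2 is never given the Hyper-V signature leaves
	 * (HYPERV_CPUID_INTERFACE etc.), so vcpu->arch.hyperv_enabled stays
	 * false even though eVMCS is enabled.
	 */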
> > If we fix the kvm_hv_set_cpuid() allocation failure, then we can also guarantee
> > that vcpu->arch.hyperv is non-NULL if vcpu->arch.hyperv_enabled==true. And then
> > we can gate the guest eVMCS flow on hyperv_enabled, and evmcs_get_unsupported_ctls()
> > can then WARN if hv_vcpu is NULL.
> >
>
> Alternatively, we can just KVM_BUG_ON() in kvm_hv_set_cpuid() when
> allocation fails, at least for the time being as the VM is likely
> useless anyway.
I'd prefer not to use KVM_BUG_ON() in this case. A proper fix isn't that much
more code, and this isn't a KVM bug unless we consciously make it one :-)
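The fix I'm thinking of is roughly the below (untested, from memory of the
current code; plumbing the error out through KVM_SET_CPUID2, and not leaving
hyperv_enabled set on failure, is elided):

	-void kvm_hv_set_cpuid(struct kvm_vcpu *vcpu)
	+int kvm_hv_set_cpuid(struct kvm_vcpu *vcpu)
	 {
		...
	-	if (kvm_hv_vcpu_init(vcpu))
	-		return;
	+	/*
	+	 * Fail the ioctl on allocation failure so that
	+	 * hyperv_enabled==true guarantees a non-NULL hv_vcpu, which in
	+	 * turn lets evmcs_get_unsupported_ctls() WARN on NULL.
	+	 */
	+	if (kvm_hv_vcpu_init(vcpu))
	+		return -ENOMEM;
		...
	+	return 0;
	 }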
> > Assuming I'm not overlooking something, I'll fold in yet more patches.
> >
>
> Thanks for the thorough review here and don't hesitate to speak up when
you think it's too much of a change to do upon queueing :-)
Heh, this definitely snowballed beyond "fixup on queue". Let's sort out how to
address the filtering issue and then decide how to handle v6.