Message-ID: <996327789.31733050.1515450744981.JavaMail.zimbra@redhat.com>
Date: Mon, 8 Jan 2018 17:32:24 -0500 (EST)
From: Paolo Bonzini <pbonzini@...hat.com>
To: Jim Mattson <jmattson@...gle.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
kvm list <kvm@...r.kernel.org>, aliguori@...zon.com,
Tom Lendacky <thomas.lendacky@....com>, dwmw@...zon.co.uk,
bp@...en8.de
Subject: Re: [PATCH 3/7] kvm: vmx: pass MSR_IA32_SPEC_CTRL and
MSR_IA32_PRED_CMD down to the guest
> I have:
>
>         if (!have_spec_ctrl ||
>             (!msr_info->host_initiated &&
>              !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL)))
>                 return 1;
>         msr_info->data = to_vmx(vcpu)->msr_ia32_spec_ctrl;
>         break;
> I have:
>
>         if (!have_spec_ctrl ||
>             (!msr_info->host_initiated &&
>              !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL)))
>                 return 1;
>         to_vmx(vcpu)->msr_ia32_spec_ctrl = data;
>         break;
Both better than mine.
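
For reference, a minimal sketch of how those two fragments would sit in
the MSR_IA32_SPEC_CTRL cases of vmx_get_msr() and vmx_set_msr(); the
case labels and the surrounding switch are assumed here, while the
have_spec_ctrl flag and the msr_ia32_spec_ctrl field are taken from the
snippets above:

        /* Sketch only: rdmsr side, in vmx_get_msr()'s switch on msr_info->index. */
        case MSR_IA32_SPEC_CTRL:
                if (!have_spec_ctrl ||
                    (!msr_info->host_initiated &&
                     !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL)))
                        return 1;
                msr_info->data = to_vmx(vcpu)->msr_ia32_spec_ctrl;
                break;

        /* Sketch only: wrmsr side, in vmx_set_msr(), where data is msr_info->data. */
        case MSR_IA32_SPEC_CTRL:
                if (!have_spec_ctrl ||
                    (!msr_info->host_initiated &&
                     !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL)))
                        return 1;
                to_vmx(vcpu)->msr_ia32_spec_ctrl = data;
                break;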
> > +        vmx_disable_intercept_for_msr(MSR_IA32_SPEC_CTRL, false);
> > +        vmx_disable_intercept_for_msr(MSR_IA32_PRED_CMD, false);
>
> I have a lot of changes to MSR permission bitmap handling, but these
> intercepts should only be disabled when guest_cpuid_has(vcpu,
> X86_FEATURE_SPEC_CTRL).
That's harder to backport and not strictly necessary (just as, for
example, we don't check guest CPUID bits before emulation). I agree
that your version is better, but I think the above is fine as a
minimal fix.
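
For completeness, a sketch of the CPUID-gated variant; this assumes an
enable counterpart to the disable helper in the hunk above and that it
is driven from the CPUID-update path, neither of which is shown in this
thread:

        /*
         * Sketch only: toggle the pass-through from vmx_cpuid_update()
         * based on the guest's CPUID.  vmx_enable_intercept_for_msr()
         * is assumed to exist as the counterpart of the helper above.
         */
        if (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL)) {
                vmx_disable_intercept_for_msr(MSR_IA32_SPEC_CTRL, false);
                vmx_disable_intercept_for_msr(MSR_IA32_PRED_CMD, false);
        } else {
                vmx_enable_intercept_for_msr(MSR_IA32_SPEC_CTRL, false);
                vmx_enable_intercept_for_msr(MSR_IA32_PRED_CMD, false);
        }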
> Here, I have:
>
>         /*
>          * If the guest is allowed to write to MSR_IA32_SPEC_CTRL,
>          * store it on VM-exit.
>          */
>         if (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
>                 add_autostore_msr(vmx, MSR_IA32_SPEC_CTRL);
>         else
>                 clear_autostore_msr(vmx, MSR_IA32_SPEC_CTRL);
>
>         /*
>          * If the guest's IA32_SPEC_CTRL MSR doesn't match the host's
>          * IA32_SPEC_CTRL MSR, then add the MSR to the atomic switch
>          * MSRs, so that the guest value will be loaded on VM-entry
>          * and the host value will be loaded on VM-exit.
>          */
>         if (vmx->msr_ia32_spec_ctrl != spec_ctrl_enabled())
>                 add_atomic_switch_msr(vmx, MSR_IA32_SPEC_CTRL,
>                                       vmx->msr_ia32_spec_ctrl,
>                                       spec_ctrl_enabled());
>         else
>                 clear_atomic_switch_msr(vmx, MSR_IA32_SPEC_CTRL);
>
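
As a point of reference, the existing vmx.c helper used above has
roughly the following contract; the autostore helpers and
spec_ctrl_enabled() are additions in Jim's tree and are not shown in
this thread:

        /*
         * Rough shape of the existing helper (declaration only, not its
         * implementation): queue an entry on the VM-entry MSR-load and
         * VM-exit MSR-load lists so the CPU writes guest_val on VM-entry
         * and host_val on VM-exit.
         */
        static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
                                          u64 guest_val, u64 host_val);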
> >          atomic_switch_perf_msrs(vmx);
> >
> >          vmx_arm_hv_timer(vcpu);
> > @@ -9707,6 +9733,12 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
> >  #endif
> >                );
> >
> > +        if (have_spec_ctrl) {
> > +                rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
> > +                if (vmx->spec_ctrl)
> > +                        wrmsrl(MSR_IA32_SPEC_CTRL, 0);
> > +        }
> > +
>
> I know the VM-exit MSR load and store lists are probably slower, but
> I'm a little uncomfortable restoring the host's IA32_SPEC_CTRL MSR
> late if the guest has it clear and the host has it set.
There is no indirect branch before that point though, is there?
Paolo
> >          /* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
> >          if (vmx->host_debugctlmsr)
> >                  update_debugctlmsr(vmx->host_debugctlmsr);
> > --
> > 1.8.3.1
> >
> >
>