Message-ID: <CALMp9eTo=fTmQwcfumeU0_R49ePy+3-rNRan7T3nBeZeKza9MA@mail.gmail.com>
Date: Mon, 8 Jan 2018 15:19:40 -0800
From: Jim Mattson <jmattson@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
kvm list <kvm@...r.kernel.org>, aliguori@...zon.com,
Tom Lendacky <thomas.lendacky@....com>, dwmw@...zon.co.uk,
bp@...en8.de
Subject: Re: [PATCH 3/7] kvm: vmx: pass MSR_IA32_SPEC_CTRL and
MSR_IA32_PRED_CMD down to the guest
On Mon, Jan 8, 2018 at 2:32 PM, Paolo Bonzini <pbonzini@...hat.com> wrote:
>
>> I have:
>>
>> if (!have_spec_ctrl ||
>>     (!msr_info->host_initiated &&
>>      !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL)))
>>         return 1;
>> msr_info->data = to_vmx(vcpu)->msr_ia32_spec_ctrl;
>> break;
>
>> I have:
>>
>> if (!have_spec_ctrl ||
>>     (!msr_info->host_initiated &&
>>      !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL)))
>>         return 1;
>> to_vmx(vcpu)->msr_ia32_spec_ctrl = data;
>> break;
>
> Both better than mine.
>
>> > +        vmx_disable_intercept_for_msr(MSR_IA32_SPEC_CTRL, false);
>> > +        vmx_disable_intercept_for_msr(MSR_IA32_PRED_CMD, false);
>>
>> I have a lot of changes to MSR permission bitmap handling, but these
>> intercepts should only be disabled when guest_cpuid_has(vcpu,
>> X86_FEATURE_SPEC_CTRL).
>
> That's harder to backport and not strictly necessary (just like
> e.g. we don't check guest CPUID bits before emulation). I agree that
> your version is better, but I think the above is fine as a minimal
> fix.
Due to the impact that spec_ctrl has on the neighboring hyperthread,
one may want to disable MSRs 0x48 (IA32_SPEC_CTRL) and 0x49
(IA32_PRED_CMD) for a particular VM. We do this by having the
userspace agent mask off CPUID.(EAX=7, ECX=0).EDX[26] and
CPUID.(EAX=7, ECX=0).EDX[27]. With your patch, however, *any* VCPU
gets unrestricted access to these MSRs, with no mechanism for
disabling them.
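
Roughly, what I have in mind is to key the MSR permission bitmap off
of guest CPUID, along the lines of the sketch below. The per-vCPU
intercept helpers here are placeholders for however the bitmap rework
ends up exposing this, not the exact code I have:

    /*
     * Sketch only: assumes a per-vCPU MSR permission bitmap and
     * hypothetical helpers that enable/disable interception on it.
     * Called whenever guest CPUID changes (e.g. from vmx_cpuid_update()).
     */
    static void vmx_update_spec_ctrl_intercepts(struct kvm_vcpu *vcpu)
    {
            if (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL)) {
                    /* Feature exposed by userspace: pass the MSRs through. */
                    vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SPEC_CTRL, false);
                    vmx_disable_intercept_for_msr(vcpu, MSR_IA32_PRED_CMD, false);
            } else {
                    /* Feature masked off by userspace: keep intercepting. */
                    vmx_enable_intercept_for_msr(vcpu, MSR_IA32_SPEC_CTRL, false);
                    vmx_enable_intercept_for_msr(vcpu, MSR_IA32_PRED_CMD, false);
            }
    }

That way, having userspace mask the CPUID bits actually keeps these
MSRs intercepted for the VMs where we want them off.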
>> Here, I have:
>>
>> /*
>>  * If the guest is allowed to write to MSR_IA32_SPEC_CTRL,
>>  * store it on VM-exit.
>>  */
>> if (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
>>         add_autostore_msr(vmx, MSR_IA32_SPEC_CTRL);
>> else
>>         clear_autostore_msr(vmx, MSR_IA32_SPEC_CTRL);
>>
>> /*
>>  * If the guest's IA32_SPEC_CTRL MSR doesn't match the host's
>>  * IA32_SPEC_CTRL MSR, then add the MSR to the atomic switch
>>  * MSRs, so that the guest value will be loaded on VM-entry
>>  * and the host value will be loaded on VM-exit.
>>  */
>> if (vmx->msr_ia32_spec_ctrl != spec_ctrl_enabled())
>>         add_atomic_switch_msr(vmx, MSR_IA32_SPEC_CTRL,
>>                               vmx->msr_ia32_spec_ctrl,
>>                               spec_ctrl_enabled());
>> else
>>         clear_atomic_switch_msr(vmx, MSR_IA32_SPEC_CTRL);
>>
>> > atomic_switch_perf_msrs(vmx);
>> >
>> > vmx_arm_hv_timer(vcpu);
>> > @@ -9707,6 +9733,12 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
>> > #endif
>> > );
>> >
>> > +        if (have_spec_ctrl) {
>> > +                rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
>> > +                if (vmx->spec_ctrl)
>> > +                        wrmsrl(MSR_IA32_SPEC_CTRL, 0);
>> > +        }
>> > +
>>
>> I know the VM-exit MSR load and store lists are probably slower, but
>> I'm a little uncomfortable restoring the host's IA32_SPEC_CTRL MSR
>> late if the guest has it clear and the host has it set.
>
> There is no indirect branch before, though, is there?
I guess that depends on how you define "before."
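
To make that concrete, the alternative I have in mind is to let the
VM-exit MSR-load list restore the host value, so the CPU itself
performs the restore before the first host instruction runs. A sketch
along the lines of the hunk quoted above (spec_ctrl_enabled() is just
a stand-in for however the host's value is tracked):

    /*
     * At VM-entry setup time: if the guest's IA32_SPEC_CTRL differs
     * from the host's, have hardware load the guest value on VM-entry
     * and the host value on VM-exit, so the restore happens before any
     * host instructions (indirect branches included) execute.
     */
    u64 host = spec_ctrl_enabled();    /* stand-in for the host value */

    if (vmx->msr_ia32_spec_ctrl != host)
            add_atomic_switch_msr(vmx, MSR_IA32_SPEC_CTRL,
                                  vmx->msr_ia32_spec_ctrl, host);
    else
            clear_atomic_switch_msr(vmx, MSR_IA32_SPEC_CTRL);

The cost is the extra MSR-area processing on every entry/exit, which
is why I noted the lists are probably slower.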
> Paolo
>
>> >         /* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
>> >         if (vmx->host_debugctlmsr)
>> >                 update_debugctlmsr(vmx->host_debugctlmsr);
>> > --
>> > 1.8.3.1
>> >
>> >
>>