Message-ID: <14f97819-11af-5072-d4f2-a7b4f16d734e@suse.com>
Date: Mon, 21 Aug 2023 19:46:17 +0300
From: Nikolay Borisov <nik.borisov@...e.com>
To: Sean Christopherson <seanjc@...gle.com>,
Josh Poimboeuf <jpoimboe@...nel.org>
Cc: Andrew Cooper <andrew.cooper3@...rix.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, Borislav Petkov <bp@...en8.de>,
Peter Zijlstra <peterz@...radead.org>,
Babu Moger <babu.moger@....com>,
Paolo Bonzini <pbonzini@...hat.com>, David.Kaplan@....com,
gregkh@...uxfoundation.org, Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB
On 21.08.23 19:35, Sean Christopherson wrote:
> On Mon, Aug 21, 2023, Josh Poimboeuf wrote:
>> On Mon, Aug 21, 2023 at 10:34:38AM +0100, Andrew Cooper wrote:
>>> On 21/08/2023 2:19 am, Josh Poimboeuf wrote:
>>>> The IBPB_BRTYPE and SBPB CPUID bits aren't set by HW.
>>>
>>> "Current hardware".
>>>
>>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>>> index c381770bcbf1..dd7472121142 100644
>>>> --- a/arch/x86/kvm/x86.c
>>>> +++ b/arch/x86/kvm/x86.c
>>>> @@ -3676,12 +3676,13 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>>>> if (!msr_info->host_initiated && !guest_has_pred_cmd_msr(vcpu))
>>>> return 1;
>>>>
>>>> - if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB))
>>>> + if (boot_cpu_has(X86_FEATURE_IBPB) && data == PRED_CMD_IBPB)
>>>> + wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
>>>> + else if (boot_cpu_has(X86_FEATURE_SBPB) && data == PRED_CMD_SBPB)
>>>> + wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_SBPB);
>>>> + else if (data)
>>>> return 1;
>>>
>>> SBPB | IBPB is an explicitly permitted combination, but will be rejected
>>> by this logic.
>>
>> Ah yes, I see that now:
>>
>> If software writes PRED_CMD with both bits 0 and 7 set to 1, the
>> processor performs an IBPB operation.
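
To spell out Andrew's point: PRED_CMD_IBPB is BIT(0) and PRED_CMD_SBPB is BIT(7), so
the explicitly permitted combination is

	data = PRED_CMD_IBPB | PRED_CMD_SBPB;	/* 0x81 */

which matches neither of the '==' checks in the hunk above and so falls through to
"return 1".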
>
> The KVM code being a bit funky isn't doing you any favors. This is the least
> awful approach I could come up with:
>
> case MSR_IA32_PRED_CMD: {
> u64 reserved_bits = ~(PRED_CMD_IBPB | PRED_CMD_SBPB);
>
> if (!msr_info->host_initiated) {
> if (!guest_has_pred_cmd_msr(vcpu))
> return 1;
>
> if (!guest_cpuid_has(vcpu, X86_FEATURE_SBPB))
> reserved_bits |= PRED_CMD_SBPB;
> }
>
> if (!boot_cpu_has(X86_FEATURE_IBPB))
> reserved_bits |= PRED_CMD_IBPB;
>
> if (!boot_cpu_has(X86_FEATURE_SBPB))
> reserved_bits |= PRED_CMD_SBPB;
>
> if (!data)
> break;
>
> wrmsrl(MSR_IA32_PRED_CMD, data);
> break;
> }
>
> There are more wrinkles though. KVM passes through MSR_IA32_PRED_CMD based on
> IBPB. If hardware supports both IBPB and SBPB, but userspace does NOT expose
> SBPB to the guest, then KVM will create a virtualization hole where the guest can
> write SBPB against userspace's wishes. I haven't read up on SBPB enough to know
> whether or not that's problematic.
>
> And conversely, if userspace exposes SBPB but not IBPB, then KVM will intercept
> writes to MSR_IA32_PRED_CMD and probably tank guest performance. Again, I haven't
> paid attention enough to know if this is a reasonable configuration, i.e. whether
> or not it's worth caring about in KVM.
>
> If the virtualization holes are deemed safe, then the easiest thing would be to
> treat MSR_IA32_PRED_CMD as existing if either IBPB or SBPB exists. E.g.
>
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index b1658c0de847..e4db844a58fe 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -174,7 +174,8 @@ static inline bool guest_has_spec_ctrl_msr(struct kvm_vcpu *vcpu)
> static inline bool guest_has_pred_cmd_msr(struct kvm_vcpu *vcpu)
> {
> return (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) ||
> - guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB));
> + guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB) ||
> + guest_cpuid_has(vcpu, X86_FEATURE_SBPB));
> }
>
> static inline bool supports_cpuid_fault(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 12688754c556..aa4620fb43f8 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3656,17 +3656,33 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> vcpu->arch.perf_capabilities = data;
> kvm_pmu_refresh(vcpu);
> break;
> - case MSR_IA32_PRED_CMD:
> - if (!msr_info->host_initiated && !guest_has_pred_cmd_msr(vcpu))
> - return 1;
> + case MSR_IA32_PRED_CMD: {
> + u64 reserved_bits = ~(PRED_CMD_IBPB | PRED_CMD_SBPB);
> +
> + if (!msr_info->host_initiated) {
> + if (!guest_has_pred_cmd_msr(vcpu))
> + return 1;
> +
> + if (!guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
> + !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB))
> + reserved_bits |= PRED_CMD_IBPB;
> +
> + if (!guest_cpuid_has(vcpu, X86_FEATURE_SBPB))
> + reserved_bits |= PRED_CMD_SBPB;
> + }
> +
> + if (!boot_cpu_has(X86_FEATURE_IBPB))
> + reserved_bits |= PRED_CMD_IBPB;
> +
> + if (!boot_cpu_has(X86_FEATURE_SBPB))
> + reserved_bits |= PRED_CMD_SBPB;
>
> - if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB))
> - return 1;
Surely data must be sanitized against reserved_bits before this if is
executed?
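
Something like the following is what I have in mind (an untested sketch on top of
your second diff, reusing the reserved_bits mask built above):

	if (data & reserved_bits)
		return 1;

	if (!data)
		break;

	wrmsrl(MSR_IA32_PRED_CMD, data);
	break;

i.e. reject a write with any reserved bit set before the !data fast path, and
before the value reaches hardware.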
> if (!data)
> break;
>
> - wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
> + wrmsrl(MSR_IA32_PRED_CMD, data);
> break;
> + }
> case MSR_IA32_FLUSH_CMD:
> if (!msr_info->host_initiated &&
> !guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D))