Message-ID: <ff6b7e9d90d80feb9dcabb0fbd3808c04db3ff94.camel@redhat.com>
Date: Thu, 02 Nov 2023 20:38:12 +0200
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Yang Weijiang <weijiang.yang@...el.com>, pbonzini@...hat.com,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
dave.hansen@...el.com, peterz@...radead.org, chao.gao@...el.com,
rick.p.edgecombe@...el.com, john.allen@....com
Subject: Re: [PATCH v6 19/25] KVM: VMX: Emulate read and write to CET MSRs
On Wed, 2023-11-01 at 09:31 -0700, Sean Christopherson wrote:
> On Tue, Oct 31, 2023, Maxim Levitsky wrote:
> > On Thu, 2023-09-14 at 02:33 -0400, Yang Weijiang wrote:
> > > Add an emulation interface for CET MSR access. The emulation code is split
> > > into a common part and a vendor-specific part. The former does the common
> > > checks for the MSRs and reads/writes XSAVE-managed MSRs directly via the
> > > helpers, while the latter accesses the MSRs linked to VMCS fields.
> > >
> > > Signed-off-by: Yang Weijiang <weijiang.yang@...el.com>
> > > ---
>
> ...
>
> > > +	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> > > +	case MSR_KVM_SSP:
> > > +		if (host_msr_reset && kvm_cpu_cap_has(X86_FEATURE_SHSTK))
> > > +			break;
> > > +		if (!guest_can_use(vcpu, X86_FEATURE_SHSTK))
> > > +			return 1;
> > > +		if (index == MSR_KVM_SSP && !host_initiated)
> > > +			return 1;
> > > +		if (is_noncanonical_address(data, vcpu))
> > > +			return 1;
> > > +		if (index != MSR_IA32_INT_SSP_TAB && !IS_ALIGNED(data, 4))
> > > +			return 1;
> > > +		break;
> > Once again, I'd prefer to have an ioctl for setting/getting SSP; that would
> > make the above code simpler (e.g. there would be no need to check that the
> > write comes from the host, etc.).
>
> I don't think an ioctl() would be simpler overall, especially when factoring in
> userspace. With a synthetic MSR, we get the following quite cheaply:
>
> 1. Enumerating support to userspace.
> 2. Save/restore of the value, e.g. for live migration.
> 3. Vendor hooks for propagating values to/from the VMCS/VMCB.
>
> For an ioctl(),
> #1 would require a capability, #2 (and #1 to some extent) would
> require new userspace flows, and #3 would require new kvm_x86_ops hooks.
>
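
(For reference, a rough userspace-side sketch of point #2: if the synthetic MSR shows
up in KVM_GET_MSR_INDEX_LIST, the existing KVM_GET_MSRS save flow picks it up with no
new plumbing. This assumes MSR_KVM_SSP is defined by the series' uapi headers; nothing
here is the final interface.)

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

static int save_guest_ssp(int vcpu_fd, __u64 *ssp)
{
	struct {
		struct kvm_msrs hdr;
		struct kvm_msr_entry entry;
	} msrs;

	memset(&msrs, 0, sizeof(msrs));
	msrs.hdr.nmsrs = 1;
	msrs.entry.index = MSR_KVM_SSP;	/* synthetic index assumed from the series */

	/* KVM_GET_MSRS returns the number of MSRs successfully read. */
	if (ioctl(vcpu_fd, KVM_GET_MSRS, &msrs) != 1)
		return -1;

	*ssp = msrs.entry.data;
	return 0;
}
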
> The synthetic MSR adds a small amount of messiness, as does bundling
> MSR_IA32_INT_SSP_TAB with the other shadow stack MSRs. The bulk of the mess comes
> from the need to allow userspace to write '0' when KVM has enumerated support to
> userspace.
Let me put it this way - all hacks start like that, and in this case it is an API/ABI
hack, so we will have to live with it forever.

Once there is a precedent, trust me, there will be tens of new 'fake' MSRs added, and
the interface will become one big mess.

As I suggested, if you don't want to add a new capability/ioctl and vendor callback per
new x86 arch register, then let's implement KVM_GET_ONE_REG/KVM_SET_ONE_REG, and then it
will be really easy to add new regs without confusing users and without polluting the
MSR namespace with MSRs that don't exist.
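
Just to illustrate what I mean (a sketch only - KVM_X86_REG_SSP is a made-up register
ID; KVM_{GET,SET}_ONE_REG and struct kvm_one_reg already exist on other architectures,
only the x86 IDs would be new):

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int set_guest_ssp(int vcpu_fd, __u64 ssp)
{
	struct kvm_one_reg reg = {
		.id   = KVM_X86_REG_SSP,		/* hypothetical x86 ONE_REG id */
		.addr = (__u64)(unsigned long)&ssp,	/* userspace buffer holding the value */
	};

	/* Returns 0 on success, same as on the other architectures. */
	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}
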
Best regards,
Maxim Levitsky
>
> If we isolate MSR_IA32_INT_SSP_TAB, that'll help with both the synthetic MSR and with
> MSR_IA32_INT_SSP_TAB itself.  For the unfortunate "host reset" behavior, the best idea I
> came up with is to add a helper. It's still a bit ugly, but the ugliness is
> contained in a helper and IMO makes it much easier to follow the case statements.
>
> get:
>
> 	case MSR_IA32_INT_SSP_TAB:
> 		if (!guest_can_use(vcpu, X86_FEATURE_SHSTK) ||
> 		    !guest_cpuid_has(vcpu, X86_FEATURE_LM))
> 			return 1;
> 		break;
> 	case MSR_KVM_SSP:
> 		if (!host_initiated)
> 			return 1;
> 		fallthrough;
> 	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> 		if (!guest_can_use(vcpu, X86_FEATURE_SHSTK))
> 			return 1;
> 		break;
>
> static bool is_set_cet_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u64 data,
> 				   bool host_initiated)
> {
> 	bool any_cet = index == MSR_IA32_S_CET || index == MSR_IA32_U_CET;
> 
> 	if (guest_can_use(vcpu, X86_FEATURE_SHSTK))
> 		return true;
> 
> 	if (any_cet && guest_can_use(vcpu, X86_FEATURE_IBT))
> 		return true;
> 
> 	/*
> 	 * If KVM supports the MSR, i.e. has enumerated the MSR existence to
> 	 * userspace, then userspace is allowed to write '0' irrespective of
> 	 * whether or not the MSR is exposed to the guest.
> 	 */
> 	if (!host_initiated || data)
> 		return false;
> 
> 	if (kvm_cpu_cap_has(X86_FEATURE_SHSTK))
> 		return true;
> 
> 	return any_cet && kvm_cpu_cap_has(X86_FEATURE_IBT);
> }
>
> set:
> 	case MSR_IA32_U_CET:
> 	case MSR_IA32_S_CET:
> 		if (!is_set_cet_msr_allowed(vcpu, index, data, host_initiated))
> 			return 1;
> 		if (data & CET_US_RESERVED_BITS)
> 			return 1;
> 		if (!guest_can_use(vcpu, X86_FEATURE_SHSTK) &&
> 		    (data & CET_US_SHSTK_MASK_BITS))
> 			return 1;
> 		if (!guest_can_use(vcpu, X86_FEATURE_IBT) &&
> 		    (data & CET_US_IBT_MASK_BITS))
> 			return 1;
> 		if (!IS_ALIGNED(CET_US_LEGACY_BITMAP_BASE(data), 4))
> 			return 1;
> 
> 		/* IBT can be suppressed iff the TRACKER isn't WAIT_ENDBR. */
> 		if ((data & CET_SUPPRESS) && (data & CET_WAIT_ENDBR))
> 			return 1;
> 		break;
> 	case MSR_IA32_INT_SSP_TAB:
> 		if (!guest_cpuid_has(vcpu, X86_FEATURE_LM))
> 			return 1;
> 
> 		if (is_noncanonical_address(data, vcpu))
> 			return 1;
> 		break;
> 	case MSR_KVM_SSP:
> 		if (!host_initiated)
> 			return 1;
> 		fallthrough;
> 	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> 		if (!is_set_cet_msr_allowed(vcpu, index, data, host_initiated))
> 			return 1;
> 		if (is_noncanonical_address(data, vcpu))
> 			return 1;
> 		if (!IS_ALIGNED(data, 4))
> 			return 1;
> 		break;
> 	}
>