Date:   Fri, 3 Nov 2023 16:18:25 +0800
From:   "Yang, Weijiang" <weijiang.yang@...el.com>
To:     Sean Christopherson <seanjc@...gle.com>
CC:     <pbonzini@...hat.com>, <kvm@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <dave.hansen@...el.com>,
        <peterz@...radead.org>, <chao.gao@...el.com>,
        <rick.p.edgecombe@...el.com>, <john.allen@....com>,
        Maxim Levitsky <mlevitsk@...hat.com>
Subject: Re: [PATCH v6 19/25] KVM: VMX: Emulate read and write to CET MSRs

On 11/2/2023 12:31 AM, Sean Christopherson wrote:
> On Tue, Oct 31, 2023, Maxim Levitsky wrote:
>> On Thu, 2023-09-14 at 02:33 -0400, Yang Weijiang wrote:
>>> Add emulation interface for CET MSR access. The emulation code is split
>>> into common part and vendor specific part. The former does common check
>>> for MSRs and reads/writes directly from/to XSAVE-managed MSRs via the
>>> helpers while the latter accesses the MSRs linked to VMCS fields.
>>>
>>> Signed-off-by: Yang Weijiang <weijiang.yang@...el.com>
>>> ---
> ...
>
>>> +	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
>>> +	case MSR_KVM_SSP:
>>> +		if (host_msr_reset && kvm_cpu_cap_has(X86_FEATURE_SHSTK))
>>> +			break;
>>> +		if (!guest_can_use(vcpu, X86_FEATURE_SHSTK))
>>> +			return 1;
>>> +		if (index == MSR_KVM_SSP && !host_initiated)
>>> +			return 1;
>>> +		if (is_noncanonical_address(data, vcpu))
>>> +			return 1;
>>> +		if (index != MSR_IA32_INT_SSP_TAB && !IS_ALIGNED(data, 4))
>>> +			return 1;
>>> +		break;
>> Once again, I'd prefer to have an ioctl for setting/getting SSP; this will
>> make the above code simpler (e.g. there will be no need to check that the
>> write comes from the host, etc.).
> I don't think an ioctl() would be simpler overall, especially when factoring in
> userspace.  With a synthetic MSR, we get the following quite cheaply:
>
>   1. Enumerating support to userspace.
>   2. Save/restore of the value, e.g. for live migration.
>   3. Vendor hooks for propagating values to/from the VMCS/VMCB.
>
> For an ioctl(), #1 would require a capability, #2 (and #1 to some extent) would
> require new userspace flows, and #3 would require new kvm_x86_ops hooks.
>
> The synthetic MSR adds a small amount of messiness, as does bundling
> MSR_IA32_INT_SSP_TAB with the other shadow stack MSRs.  The bulk of the mess comes
> from the need to allow userspace to write '0' when KVM enumerated support to
> userspace.
>
> If we isolate MSR_IA32_INT_SSP_TAB, that'll help with the synthetic MSR and with
> the remaining shadow stack MSRs.  For the unfortunate "host reset" behavior, the best idea I
> came up with is to add a helper.  It's still a bit ugly, but the ugliness is
> contained in a helper and IMO makes it much easier to follow the case statements.

Frankly speaking, the existing code is not hard for me to understand :-); the handling
of MSR_KVM_SSP and MSR_IA32_INT_SSP_TAB is straightforward once one reads the related spec.
But I'll take your advice and fold in the changes below. Thanks!
> get:
>
> 	case MSR_IA32_INT_SSP_TAB:
> 		if (!guest_can_use(vcpu, X86_FEATURE_SHSTK) ||
> 		    !guest_cpuid_has(vcpu, X86_FEATURE_LM))
> 			return 1;
> 		break;
> 	case MSR_KVM_SSP:
> 		if (!host_initiated)
> 			return 1;
> 		fallthrough;
> 	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> 		if (!guest_can_use(vcpu, X86_FEATURE_SHSTK))
> 			return 1;
> 		break;
>
> static bool is_set_cet_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u64 data,
> 				   bool host_initiated)
> {
> 	bool any_cet = index == MSR_IA32_S_CET || index == MSR_IA32_U_CET;
>
> 	if (guest_can_use(vcpu, X86_FEATURE_SHSTK))
> 		return true;
>
> 	if (any_cet && guest_can_use(vcpu, X86_FEATURE_IBT))
> 		return true;
>
> 	/*
> 	 * If KVM supports the MSR, i.e. has enumerated the MSR existence to
> 	 * userspace, then userspace is allowed to write '0' irrespective of
> 	 * whether or not the MSR is exposed to the guest.
> 	 */
> 	if (!host_initiated || data)
> 		return false;
> 	if (kvm_cpu_cap_has(X86_FEATURE_SHSTK))
> 		return true;
>
> 	return any_cet && kvm_cpu_cap_has(X86_FEATURE_IBT);
> }
>
> set:
> 	case MSR_IA32_U_CET:
> 	case MSR_IA32_S_CET:
> 		if (!is_set_cet_msr_allowed(vcpu, index, data, host_initiated))
> 			return 1;
> 		if (data & CET_US_RESERVED_BITS)
> 			return 1;
> 		if (!guest_can_use(vcpu, X86_FEATURE_SHSTK) &&
> 		    (data & CET_US_SHSTK_MASK_BITS))
> 			return 1;
> 		if (!guest_can_use(vcpu, X86_FEATURE_IBT) &&
> 		    (data & CET_US_IBT_MASK_BITS))
> 			return 1;
> 		if (!IS_ALIGNED(CET_US_LEGACY_BITMAP_BASE(data), 4))
> 			return 1;
>
> 		/* IBT can be suppressed iff the TRACKER isn't WAIT_ENDBR. */
> 		if ((data & CET_SUPPRESS) && (data & CET_WAIT_ENDBR))
> 			return 1;
> 		break;
> 	case MSR_IA32_INT_SSP_TAB:
> 		if (!guest_cpuid_has(vcpu, X86_FEATURE_LM))
> 			return 1;
>
> 		if (is_noncanonical_address(data, vcpu))
> 			return 1;
> 		break;
> 	case MSR_KVM_SSP:
> 		if (!host_initiated)
> 			return 1;
> 		fallthrough;
> 	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> 		if (!is_set_cet_msr_allowed(vcpu, index, data, host_initiated))
> 			return 1;
> 		if (is_noncanonical_address(data, vcpu))
> 			return 1;
> 		if (!IS_ALIGNED(data, 4))
> 			return 1;
> 		break;
> 	}
