Message-ID: <Yu1FZX7hwT1X7DSw@google.com>
Date:   Fri, 5 Aug 2022 16:29:25 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Like Xu <like.xu.linux@...il.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] KVM: x86: Reject writes to PERF_CAPABILITIES feature
 MSR after KVM_RUN

On Fri, Aug 05, 2022, Like Xu wrote:
> From: Like Xu <likexu@...cent.com>
> 
> KVM may do the "wrong" thing if userspace changes PERF_CAPABILITIES
> after running the vCPU, i.e. after KVM_RUN. Similar to disallowing CPUID
> changes after KVM_RUN, KVM should also disallow changing the feature MSRs
> (conservatively starting from PERF_CAPABILITIES) after KVM_RUN to prevent
> unexpected behavior.
> 
> Applying the same logic to most feature msrs in do_set_msr() may
> reduce the flexibility (one odd but reasonable user space may want
> per-vcpu control, feature by feature)

I don't follow this argument.  vcpu->arch.last_vmentry_cpu is per-vCPU, nothing
prevents userspace from defining feature MSRs differently for each vCPU, it just
has to do so before running that vCPU.
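
E.g. a gate along these lines in do_set_msr() (just a sketch, using the helper
sketched below; not the final patch) keeps the restriction per-vCPU:

	/*
	 * Sketch: disallow writes to immutable feature MSRs once this vCPU
	 * has done its first VM-Enter.  last_vmentry_cpu is -1 until then and
	 * is per-vCPU, so each vCPU can still be configured independently
	 * before it runs.
	 */
	if (vcpu->arch.last_vmentry_cpu != -1 &&
	    kvm_is_immutable_feature_msr(index))
		return 1;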

> and also increases the overhead.

Yes, there is a small amount of overhead, but it's trivial compared to the massive
switch statements in {svm,vmx}_set_msr().  And there are multiple ways to make the
overhead practically meaningless.

Ha!  Modern compilers are fantastic.  If we special case the VMX MSRs, which isn't
a terrible idea anyways (it's a good excuse to add proper defines instead of open
coding "MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC" everywhere), then the lookup
can use a compile-time constant array and generate what is effectively a switch
statement without actually needing a switch statement, and without needing to play
macro games or have redundant lists (maintenance burden).

E.g. with some prep work, the lookup can be this:

/*
 * All feature MSRs except uCode revID, which tracks the currently loaded uCode
 * patch, are immutable once the vCPU model is defined.  Special case the VMX
 * MSRs to keep the primary lookup loop short.
 */
static bool kvm_is_immutable_feature_msr(u32 msr)
{
	int i;

	if (msr >= KVM_FIRST_VMX_FEATURE_MSR && msr <= KVM_LAST_VMX_FEATURE_MSR)
		return true;

	for (i = 0; i < ARRAY_SIZE(msr_based_features_all_except_vmx); i++) {
		if (msr == msr_based_features_all_except_vmx[i])
			return msr != MSR_IA32_UCODE_REV;
	}

	return false;
}

which gcc translates to a series of CMP+JCC/SETcc instructions.

   0x00000000000291d0 <+48>:	8d 86 80 fb ff ff	lea    -0x480(%rsi),%eax
   0x00000000000291d6 <+54>:	83 f8 11	cmp    $0x11,%eax
   0x00000000000291d9 <+57>:	76 65	jbe    0x29240 <do_set_msr+160>
   0x00000000000291db <+59>:	81 fe 29 10 01 c0	cmp    $0xc0011029,%esi
   0x00000000000291e1 <+65>:	74 5d	je     0x29240 <do_set_msr+160>
   0x00000000000291e3 <+67>:	81 fe 8b 00 00 00	cmp    $0x8b,%esi
   0x00000000000291e9 <+73>:	0f 94 c0	sete   %al
   0x00000000000291ec <+76>:	81 fe 0a 01 00 00	cmp    $0x10a,%esi
   0x00000000000291f2 <+82>:	0f 94 c2	sete   %dl
   0x00000000000291f5 <+85>:	08 d0	or     %dl,%al
   0x00000000000291f7 <+87>:	75 3e	jne    0x29237 <do_set_msr+151>
   0x00000000000291f9 <+89>:	81 fe 45 03 00 00	cmp    $0x345,%esi
   0x00000000000291ff <+95>:	74 36	je     0x29237 <do_set_msr+151>
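
The prep work is basically just defines for the VMX range plus a list of the
non-VMX feature MSRs, something like this (sketch only; the entries and names
are inferred from the constants in the disassembly above and may differ in the
actual series):

#define KVM_FIRST_VMX_FEATURE_MSR	MSR_IA32_VMX_BASIC	/* 0x480 */
#define KVM_LAST_VMX_FEATURE_MSR	MSR_IA32_VMX_VMFUNC	/* 0x491 */

/* Feature MSRs that fall outside the contiguous VMX range. */
static const u32 msr_based_features_all_except_vmx[] = {
	MSR_AMD64_DE_CFG,		/* 0xc0011029 */
	MSR_IA32_UCODE_REV,		/* 0x8b, mutable: tracks the loaded uCode */
	MSR_IA32_ARCH_CAPABILITIES,	/* 0x10a */
	MSR_IA32_PERF_CAPABILITIES,	/* 0x345 */
};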

I'll send a series.
