Message-ID: <90837ad5-c9a6-42da-a5a8-fcd2d870dac8@intel.com>
Date: Fri, 19 Dec 2025 09:45:09 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: Sean Christopherson <seanjc@...gle.com>,
 Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
 Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
 x86@...nel.org, Kiryl Shutsemau <kas@...nel.org>,
 Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-coco@...ts.linux.dev,
 kvm@...r.kernel.org, Chao Gao <chao.gao@...el.com>,
 Dan Williams <dan.j.williams@...el.com>
Subject: Re: [PATCH v2 2/7] KVM: x86: Extract VMXON and EFER.SVME enablement
 to kernel

On 12/5/25 17:10, Sean Christopherson wrote:
> Move the innermost VMXON and EFER.SVME management logic out of KVM and
> into core x86 so that TDX can force VMXON without having to rely on
> KVM being loaded, e.g. to do SEAMCALLs during initialization.
> 
> Implement a per-CPU refcounting scheme so that "users", e.g. KVM and the
> future TDX code, can co-exist without pulling the rug out from under each
> other.
> 
> To avoid having to choose between SVM and VMX, simply refuse to enable
> either if both are somehow supported.  No known CPU supports both SVM and
> VMX, and it's comically unlikely such a CPU will ever exist.

Yeah, that's totally a "please share your drugs" moment, if it happens.
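
FWIW, the per-CPU refcounting being described boils down to roughly this
(the enable/disable helpers here are made-up names standing in for the
VMXON / EFER.SVME paths):

	static DEFINE_PER_CPU(unsigned int, virt_usage_count);

	/* Caller is expected to have preemption disabled. */
	int x86_virt_get_cpu_ref(void)
	{
		int r = 0;

		/* First user on this CPU enables virtualization: */
		if (!this_cpu_read(virt_usage_count))
			r = x86_virt_enable();
		if (!r)
			this_cpu_inc(virt_usage_count);
		return r;
	}

	void x86_virt_put_cpu_ref(void)
	{
		/* Last user on this CPU undoes the enable: */
		if (!this_cpu_dec_return(virt_usage_count))
			x86_virt_disable();
	}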

> +static int x86_vmx_get_cpu(void)
> +{
> +	int r;
> +
> +	if (cr4_read_shadow() & X86_CR4_VMXE)
> +		return -EBUSY;
> +
> +	intel_pt_handle_vmx(1);
> +
> +	r = x86_virt_cpu_vmxon();
> +	if (r) {
> +		intel_pt_handle_vmx(0);
> +		return r;
> +	}
> +
> +	return 0;
> +}
...
> +#define x86_virt_call(fn)				\
> +({							\
> +	int __r;					\
> +							\
> +	if (IS_ENABLED(CONFIG_KVM_INTEL) &&		\
> +	    cpu_feature_enabled(X86_FEATURE_VMX))	\
> +		__r = x86_vmx_##fn();			\
> +	else if (IS_ENABLED(CONFIG_KVM_AMD) &&		\
> +		 cpu_feature_enabled(X86_FEATURE_SVM))	\
> +		__r = x86_svm_##fn();			\
> +	else						\
> +		__r = -EOPNOTSUPP;			\
> +							\
> +	__r;						\
> +})

I'm not a super big fan of this. I know you KVM folks love your macros
and wrapping function calls in them because you hate grep. ;)

I don't like a foo_get_cpu() call having such fundamentally different
semantics than good old get_cpu() itself. *Especially* when the calls
look like:

	r = x86_virt_call(get_cpu);

and get_cpu() itself is not involved one bit. This 100% looks like it's
some kind of virt-specific call for get_cpu().

I think it's probably OK to make this get_hw_ref() or inc_hw_ref() or
something, just to keep it from being confused with get_cpu().
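
The call sites would then read something like (rename purely
illustrative):

	r = x86_virt_call(get_hw_ref);

which at least doesn't pretend to have anything to do with get_cpu().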

IMNHO, the macro magic is overkill. A couple of global function pointers
would probably be fine because none of this code is even remotely
performance sensitive. A couple of static_call()s would be fine too
because those at least make it blatantly obvious that the thing being
called is variable. A good ol' ops structure would also make things
obvious, but is probably also overkill-adjacent for this.
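
For the static_call() flavor, the sketch would be roughly (assuming the
vendor helpers land in core x86 as x86_vmx_get_cpu()/x86_svm_get_cpu()
per this patch; x86_virt_nop() and x86_virt_select() are made-up names):

	static int x86_virt_nop(void)
	{
		return -EOPNOTSUPP;
	}

	DEFINE_STATIC_CALL(x86_virt_get_cpu, x86_virt_nop);

	/* Pick the vendor implementation once, e.g. from an initcall: */
	static void __init x86_virt_select(void)
	{
		if (IS_ENABLED(CONFIG_KVM_INTEL) &&
		    cpu_feature_enabled(X86_FEATURE_VMX))
			static_call_update(x86_virt_get_cpu, x86_vmx_get_cpu);
		else if (IS_ENABLED(CONFIG_KVM_AMD) &&
			 cpu_feature_enabled(X86_FEATURE_SVM))
			static_call_update(x86_virt_get_cpu, x86_svm_get_cpu);
	}

The call sites then become:

	r = static_call(x86_virt_get_cpu)();

which makes it blatantly obvious that the callee varies, and grep can
actually find it.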

P.S. In a perfect world, the renames would also be in their own patches,
but I think I can live with it as-is.
