Message-ID: <aTe4QyE3h8LHOAMb@intel.com>
Date: Tue, 9 Dec 2025 13:48:51 +0800
From: Chao Gao <chao.gao@...el.com>
To: Sean Christopherson <seanjc@...gle.com>
CC: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
<x86@...nel.org>, Kiryl Shutsemau <kas@...nel.org>, Paolo Bonzini
<pbonzini@...hat.com>, <linux-kernel@...r.kernel.org>,
<linux-coco@...ts.linux.dev>, <kvm@...r.kernel.org>, Dan Williams
<dan.j.williams@...el.com>
Subject: Re: [PATCH v2 2/7] KVM: x86: Extract VMXON and EFER.SVME enablement
to kernel
>--- /dev/null
>+++ b/arch/x86/include/asm/virt.h
>@@ -0,0 +1,26 @@
>+/* SPDX-License-Identifier: GPL-2.0-only */
>+#ifndef _ASM_X86_VIRT_H
>+#define _ASM_X86_VIRT_H
>+
>+#include <asm/reboot.h>
asm/reboot.h isn't used.
>+
>+typedef void (cpu_emergency_virt_cb)(void);
>+
>+#if IS_ENABLED(CONFIG_KVM_X86)
>+extern bool virt_rebooting;
>+
>+void __init x86_virt_init(void);
>+
>+int x86_virt_get_cpu(int feat);
>+void x86_virt_put_cpu(int feat);
>+
>+int x86_virt_emergency_disable_virtualization_cpu(void);
>+
>+void x86_virt_register_emergency_callback(cpu_emergency_virt_cb *callback);
>+void x86_virt_unregister_emergency_callback(cpu_emergency_virt_cb *callback);
>+#else
>+static __always_inline void x86_virt_init(void) {}
Why does this need to be "__always_inline" rather than just "inline"?
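FWIW, a plain "inline" stub looks sufficient to me, i.e. (just sketching the
alternative):

	static inline void x86_virt_init(void) {}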
> static void emergency_reboot_disable_virtualization(void)
> {
> local_irq_disable();
>@@ -587,16 +543,11 @@ static void emergency_reboot_disable_virtualization(void)
> * We can't take any locks and we may be on an inconsistent state, so
> * use NMIs as IPIs to tell the other CPUs to disable VMX/SVM and halt.
> *
>- * Do the NMI shootdown even if virtualization is off on _this_ CPU, as
>- * other CPUs may have virtualization enabled.
>+ * Safely force _this_ CPU out of VMX/SVM operation, and if necessary,
>+ * blast NMIs to force other CPUs out of VMX/SVM as well.k
^ stray "k".
I don't understand the "if necessary" part. My understanding is that this code
issues the NMIs whenever the CPU supports VMX or SVM. If so, I think the
snippet below would be more readable:
	if (cpu_feature_enabled(X86_FEATURE_VMX) ||
	    cpu_feature_enabled(X86_FEATURE_SVM)) {
		x86_virt_emergency_disable_virtualization_cpu();
		nmi_shootdown_cpus_on_restart();
	}
Then x86_virt_emergency_disable_virtualization_cpu() wouldn't need to return
anything, and readers wouldn't need to dig into that function to figure out
when NMIs are "necessary" and when they are not.
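With that, the helper itself could shrink to something like (rough sketch only,
untested, reusing the names from this patch):

	void x86_virt_emergency_disable_virtualization_cpu(void)
	{
		/*
		 * IRQs must be disabled as virtualization is enabled in
		 * hardware via function call IPIs, i.e. IRQs need to be
		 * disabled to guarantee virtualization stays disabled.
		 */
		lockdep_assert_irqs_disabled();

		(void)x86_virt_call(emergency_disable_virtualization_cpu);
	}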
> */
>- if (rcu_access_pointer(cpu_emergency_virt_callback)) {
>- /* Safely force _this_ CPU out of VMX/SVM operation. */
>- cpu_emergency_disable_virtualization();
>-
>- /* Disable VMX/SVM and halt on other CPUs. */
>+ if (!x86_virt_emergency_disable_virtualization_cpu())
> nmi_shootdown_cpus_on_restart();
>- }
> }
<snip>
>+#define x86_virt_call(fn) \
>+({ \
>+ int __r; \
>+ \
>+ if (IS_ENABLED(CONFIG_KVM_INTEL) && \
>+ cpu_feature_enabled(X86_FEATURE_VMX)) \
>+ __r = x86_vmx_##fn(); \
>+ else if (IS_ENABLED(CONFIG_KVM_AMD) && \
>+ cpu_feature_enabled(X86_FEATURE_SVM)) \
>+ __r = x86_svm_##fn(); \
>+ else \
>+ __r = -EOPNOTSUPP; \
>+ \
>+ __r; \
>+})
>+
>+int x86_virt_get_cpu(int feat)
>+{
>+ int r;
>+
>+ if (!x86_virt_feature || x86_virt_feature != feat)
>+ return -EOPNOTSUPP;
>+
>+ if (this_cpu_inc_return(virtualization_nr_users) > 1)
>+ return 0;
Should we assert that preemption is disabled? Calling this API with preemption
enabled is wrong.
Maybe use __this_cpu_inc_return(), which already sanity-checks that preemption
is disabled.
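E.g. something along these lines (sketch only, assuming callers already run
with preemption disabled, e.g. under get_cpu()):

	int x86_virt_get_cpu(int feat)
	{
		lockdep_assert_preemption_disabled();

		if (!x86_virt_feature || x86_virt_feature != feat)
			return -EOPNOTSUPP;

		/* __this_cpu_*() also complains if preemption is enabled. */
		if (__this_cpu_inc_return(virtualization_nr_users) > 1)
			return 0;

		/* ... rest unchanged ... */
	}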
<snip>
>+int x86_virt_emergency_disable_virtualization_cpu(void)
>+{
>+ if (!x86_virt_feature)
>+ return -EOPNOTSUPP;
>+
>+ /*
>+ * IRQs must be disabled as virtualization is enabled in hardware via
>+ * function call IPIs, i.e. IRQs need to be disabled to guarantee
>+ * virtualization stays disabled.
>+ */
>+ lockdep_assert_irqs_disabled();
>+
>+ /*
>+ * Do the NMI shootdown even if virtualization is off on _this_ CPU, as
>+ * other CPUs may have virtualization enabled.
>+ *
>+ * TODO: Track whether or not virtualization might be enabled on other
>+ * CPUs? May not be worth avoiding the NMI shootdown...
>+ */
This comment is misplaced; the NMI shootdown is issued by the caller, not here.
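If the rationale is still worth capturing, it could be moved next to the
shootdown in the caller, e.g. (sketch):

	/*
	 * Do the NMI shootdown even if virtualization is off on _this_ CPU,
	 * as other CPUs may have virtualization enabled.
	 */
	nmi_shootdown_cpus_on_restart();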
>+ (void)x86_virt_call(emergency_disable_virtualization_cpu);
>+ return 0;
>+}