Date: Fri, 26 Apr 2024 16:52:00 +0800
From: Chao Gao <chao.gao@...el.com>
To: Sean Christopherson <seanjc@...gle.com>
CC: Paolo Bonzini <pbonzini@...hat.com>, <kvm@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/4] KVM: x86: Register emergency virt callback in common
 code, via kvm_x86_ops

>diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
>index 502704596c83..afddfe3747dd 100644
>--- a/arch/x86/kvm/vmx/x86_ops.h
>+++ b/arch/x86/kvm/vmx/x86_ops.h
>@@ -15,6 +15,7 @@ void vmx_hardware_unsetup(void);
> int vmx_check_processor_compat(void);
> int vmx_hardware_enable(void);
> void vmx_hardware_disable(void);
>+void vmx_emergency_disable(void);
> int vmx_vm_init(struct kvm *kvm);
> void vmx_vm_destroy(struct kvm *kvm);
> int vmx_vcpu_precreate(struct kvm *kvm);
>diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>index e9ef1fa4b90b..12e88aa2cca2 100644
>--- a/arch/x86/kvm/x86.c
>+++ b/arch/x86/kvm/x86.c
>@@ -9797,6 +9797,8 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
> 
> 	kvm_ops_update(ops);
> 
>+	cpu_emergency_register_virt_callback(kvm_x86_ops.emergency_disable);
>+

vmx_emergency_disable() accesses loaded_vmcss_on_cpu, but now it may be called
before loaded_vmcss_on_cpu is initialized. This may not be a problem today,
given the check for X86_CR4_VMXE in vmx_emergency_disable(), but relying on
that check is fragile. I think it is better to apply the patch below from
Isaku before this patch:

https://lore.kernel.org/kvm/c1b7f0e5c2476f9f565acda5c1e746b8d181499b.1708933498.git.isaku.yamahata@intel.com/

> 	for_each_online_cpu(cpu) {
> 		smp_call_function_single(cpu, kvm_x86_check_cpu_compat, &r, 1);
> 		if (r < 0)
>@@ -9847,6 +9849,7 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
> 	return 0;
> 
> out_unwind_ops:
>+	cpu_emergency_unregister_virt_callback(kvm_x86_ops.emergency_disable);
> 	kvm_x86_ops.hardware_enable = NULL;
> 	static_call(kvm_x86_hardware_unsetup)();
> out_mmu_exit:
>@@ -9887,6 +9890,8 @@ void kvm_x86_vendor_exit(void)
> 	static_key_deferred_flush(&kvm_xen_enabled);
> 	WARN_ON(static_branch_unlikely(&kvm_xen_enabled.key));
> #endif
>+	cpu_emergency_unregister_virt_callback(kvm_x86_ops.emergency_disable);
>+
> 	mutex_lock(&vendor_module_lock);
> 	kvm_x86_ops.hardware_enable = NULL;
> 	mutex_unlock(&vendor_module_lock);
>-- 
>2.44.0.769.g3c40516874-goog
>
>
