Message-Id: <20231030141728.1406118-1-nik.borisov@suse.com>
Date: Mon, 30 Oct 2023 16:17:28 +0200
From: Nikolay Borisov <nik.borisov@...e.com>
To: seanjc@...gle.com
Cc: pbonzini@...hat.com, x86@...nel.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Nikolay Borisov <nik.borisov@...e.com>
Subject: [PATCH] KVM: x86: Use mutex guards to eliminate __kvm_x86_vendor_init()
Current separation between (__){0,1}kvm_x86_vendor_init() is superfluous,
as the underscored version doesn't have any other callers.
Instead, use the newly added cleanup infrastructure to ensure that
kvm_x86_vendor_init() holds vendor_module_lock throughout its
execution and that the lock is released if an error occurs partway
through. No functional changes.
Signed-off-by: Nikolay Borisov <nik.borisov@...e.com>
---
arch/x86/kvm/x86.c | 15 +++------------
1 file changed, 3 insertions(+), 12 deletions(-)
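
As background for the guard(mutex)() conversion, a minimal userspace
sketch of the scope-based cleanup mechanism it relies on, with pthreads
standing in for kernel mutexes; the helper names below are illustrative
and not the actual include/linux/cleanup.h implementation:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t vendor_module_lock = PTHREAD_MUTEX_INITIALIZER;

/* Invoked automatically when the guard variable goes out of scope,
 * so the lock is dropped on every return path, including errors. */
static void mutex_guard_release(pthread_mutex_t **lock)
{
	pthread_mutex_unlock(*lock);
}

/* Acquire the lock and tie its release to the enclosing scope. */
#define guard_mutex(lock) \
	pthread_mutex_t *__guard __attribute__((cleanup(mutex_guard_release))) = \
		(pthread_mutex_lock(lock), (lock))

static int vendor_init(int fail)
{
	guard_mutex(&vendor_module_lock);

	if (fail)
		return -1;	/* unlock runs here... */

	printf("initialized under lock\n");
	return 0;		/* ...and here */
}

int main(void)
{
	return vendor_init(0);
}

This mirrors what the patch does to kvm_x86_vendor_init(): the explicit
mutex_lock()/mutex_unlock() wrapper becomes unnecessary once the unlock
is tied to scope exit.
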
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 41cce5031126..cd7c2d0f88cb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9446,11 +9446,13 @@ static void kvm_x86_check_cpu_compat(void *ret)
*(int *)ret = kvm_x86_check_processor_compatibility();
}
-static int __kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
+int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
{
u64 host_pat;
int r, cpu;
+ guard(mutex)(&vendor_module_lock);
+
if (kvm_x86_ops.hardware_enable) {
pr_err("already loaded vendor module '%s'\n", kvm_x86_ops.name);
return -EEXIST;
@@ -9580,17 +9582,6 @@ static int __kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
kmem_cache_destroy(x86_emulator_cache);
return r;
}
-
-int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
-{
- int r;
-
- mutex_lock(&vendor_module_lock);
- r = __kvm_x86_vendor_init(ops);
- mutex_unlock(&vendor_module_lock);
-
- return r;
-}
EXPORT_SYMBOL_GPL(kvm_x86_vendor_init);
void kvm_x86_vendor_exit(void)
--
2.34.1