Message-ID: <MEYP282MB4026F6DB1A248E9EE7E8BB99C3389@MEYP282MB4026.AUSP282.PROD.OUTLOOK.COM>
Date: Thu, 3 Nov 2022 20:10:06 +0800
From: johnnyaiai <arafatms@...look.com>
To: jgross@...e.com
Cc: tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, hpa@...or.com,
linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
johnnyaiai <johnnyaiai@...cent.com>
Subject: [PATCH] locking/paravirt: Fix performance regression on core bonded vCPU
From: johnnyaiai <johnnyaiai@...cent.com>
virt_spin_lock() is preferred over the native qspinlock when the
vCPU is preempted, but it causes a sizeable regression when the
vCPU is not preempted. Provide an early parameter 'novirtspin' so
the native qspinlock can be chosen instead.
will-it-scale/lock2_threads -s 10 -t 64:

    baseline    afterpatch
    559938      2166135
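
For context on what 'novirtspin' switches off: the sketch below is not
part of this patch, it is only a rough rendition (details may differ
between kernel versions) of the virt_spin_lock() fallback in
arch/x86/include/asm/qspinlock.h that virt_spin_lock_key gates:

	/*
	 * Sketch only: the hypervisor test-and-set fallback gated by
	 * virt_spin_lock_key. When the key is disabled (e.g. via the
	 * 'novirtspin' parameter added by this patch), this returns
	 * false and the native MCS-based qspinlock slow path is used.
	 */
	static inline bool virt_spin_lock(struct qspinlock *lock)
	{
		if (!static_branch_likely(&virt_spin_lock_key))
			return false;

		/*
		 * On a preempted vCPU a simple test-and-set lock behaves
		 * better than queueing behind a possibly-preempted owner.
		 */
		do {
			while (atomic_read(&lock->val) != 0)
				cpu_relax();
		} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

		return true;
	}

Disabling the key at early boot makes every virt_spin_lock() call bail
out immediately, which is consistent with the lock2_threads numbers
above.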
Signed-off-by: johnnyaiai <johnnyaiai@...cent.com>
---
 arch/x86/kernel/paravirt.c | 7 +++++++
 1 file changed, 7 insertions(+)
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 59d3d2763..529cf23fe 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -107,6 +107,13 @@ static unsigned paravirt_patch_jmp(void *insn_buff, const void *target,
 
 DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);
 
+static int __init parse_novirtspin(char *arg)
+{
+	static_branch_disable(&virt_spin_lock_key);
+	return 0;
+}
+early_param("novirtspin", parse_novirtspin);
+
 void __init native_pv_lock_init(void)
 {
 	if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
--
2.27.0
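
Usage note (illustrative, not part of the patch): with the patch
applied, the fallback can be disabled by appending the parameter to the
kernel command line, e.g. on a GRUB-based system:

	# /etc/default/grub (example path on Debian-style systems)
	GRUB_CMDLINE_LINUX="... novirtspin"
	# then regenerate the boot config, e.g. with update-grub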