Message-ID: <ZnL9z/wr+x67G14s@chenyu5-mobl2>
Date: Wed, 19 Jun 2024 23:48:31 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: Nikolay Borisov <nik.borisov@...e.com>
CC: Dave Hansen <dave.hansen@...ux.intel.com>, Juergen Gross
<jgross@...e.com>, Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar
<mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, Ajay Kaher
<ajay.kaher@...adcom.com>, <x86@...nel.org>, "H. Peter Anvin"
<hpa@...or.com>, Arnd Bergmann <arnd@...db.de>,
<virtualization@...ts.linux.dev>, <linux-kernel@...r.kernel.org>, Qiuxu Zhuo
<qiuxu.zhuo@...el.com>, Prem Nath Dey <prem.nath.dey@...el.com>, "Xiaoping
Zhou" <xiaoping.zhou@...el.com>
Subject: Re: [PATCH v2] x86/paravirt: Disable virt spinlock on bare metal
On 2024-06-19 at 18:34:34 +0300, Nikolay Borisov wrote:
>
>
> On 19.06.24 г. 18:25 ч., Chen Yu wrote:
> > Hi Nikolay,
> >
> > On 2024-06-18 at 11:24:42 +0300, Nikolay Borisov wrote:
> > >
> > >
> > > On 26.05.24 г. 4:58 ч., Chen Yu wrote:
> > > > The kernel can change spinlock behavior when running as a guest. But
> > > > this guest-friendly behavior causes performance problems on bare metal.
> > > > So there's a 'virt_spin_lock_key' static key to switch between the two
> > > > modes.
> > > >
> > > > The static key is always enabled by default (run in guest mode) and
> > > > should be disabled for bare metal (and in some guests that want native
> > > > behavior).
> > > >
> > > > Performance drop is reported when running encode/decode workload and
> > > > BenchSEE cache sub-workload.
> > > > Bisect points to commit ce0a1b608bfc ("x86/paravirt: Silence unused
> > > > native_pv_lock_init() function warning"). When CONFIG_PARAVIRT_SPINLOCKS
> > > > is disabled the virt_spin_lock_key is incorrectly set to true on bare
> > > > metal. The qspinlock degenerates to a test-and-set spinlock, which
> > > > decreases performance on bare metal.
> > > >
> > > > Fix this by disabling virt_spin_lock_key when running on bare metal,
> > > > regardless of CONFIG_PARAVIRT_SPINLOCKS.
> > > >
> > >
> > > nit:
> > >
> > > This bug wouldn't have happened if the key had been defined FALSE by default
> > > and only enabled in the appropriate case. I think it makes more sense to
> > > invert the logic: have the key FALSE by default and enable it only if the
> > > kernel is running under a hypervisor... At worst, only the virtualization
> > > case would suffer if the key is falsely left disabled.
> >
> > Thank you for your review. I agree, initializing the key to FALSE by default
> > seems more readable. Could this change be made as a follow-up adjustment on
> > top of the current fix, to keep things more bisectable?
>
> Why can't this change be squashed in the current proposed patch?
>
The current patch deals with the incorrect check of CONFIG_PARAVIRT_SPINLOCKS.
Changing virt_spin_lock_key's default value is supposed to introduce "no functional
change", but there might be some corner cases...
Anyway, I'll put the changes together in one patch and run some tests.
> >
> >
> > Set the default key to false. If booting in a VM, enable the key. Later,
> > during VM initialization, if another more efficient spinlock is preferred,
> > such as the paravirt spinlock, virt_spin_lock_key will be disabled accordingly.
>
> Yep, or simply choose the correct flavor during the initialization stage,
> with no need for the on-off dance. But that's a topic for a different
> discussion.
>
Yes, it is doable.
thanks,
Chenyu
> >
> > diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
> > index cde8357bb226..a7d3ba00e70e 100644
> > --- a/arch/x86/include/asm/qspinlock.h
> > +++ b/arch/x86/include/asm/qspinlock.h
> > @@ -66,13 +66,13 @@ static inline bool vcpu_is_preempted(long cpu)
> > #ifdef CONFIG_PARAVIRT
> > /*
> > - * virt_spin_lock_key - enables (by default) the virt_spin_lock() hijack.
> > + * virt_spin_lock_key - disables (by default) the virt_spin_lock() hijack.
> > *
> > * Native (and PV wanting native due to vCPU pinning) should disable this key.
> > * It is done in this backwards fashion to only have a single direction change,
> > * which removes ordering between native_pv_spin_init() and HV setup.
> > */
> > -DECLARE_STATIC_KEY_TRUE(virt_spin_lock_key);
> > +DECLARE_STATIC_KEY_FALSE(virt_spin_lock_key);
> > /*
> > * Shortcut for the queued_spin_lock_slowpath() function that allows
> > diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
> > index c193c9e60a1b..fec381533555 100644
> > --- a/arch/x86/kernel/paravirt.c
> > +++ b/arch/x86/kernel/paravirt.c
> > @@ -51,12 +51,12 @@ DEFINE_ASM_FUNC(pv_native_irq_enable, "sti", .noinstr.text);
> > DEFINE_ASM_FUNC(pv_native_read_cr2, "mov %cr2, %rax", .noinstr.text);
> > #endif
> > -DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);
> > +DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);
> > void __init native_pv_lock_init(void)
> > {
> > - if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
> > - static_branch_disable(&virt_spin_lock_key);
> > + if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
> > + static_branch_enable(&virt_spin_lock_key);
> > }
> > static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)