Message-ID: <aHpWW0ZPuI5thDqZ@google.com>
Date: Fri, 18 Jul 2025 07:12:43 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: lirongqing <lirongqing@...du.com>
Cc: pbonzini@...hat.com, vkuznets@...hat.com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com, x86@...nel.org,
hpa@...or.com, kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86/kvm: Reorder PV spinlock checks for dedicated CPU case
On Fri, Jul 18, 2025, lirongqing wrote:
> From: Li RongQing <lirongqing@...du.com>
>
> When a vCPU has a dedicated physical CPU, the hypervisor typically
> disables HLT exits as well,
But certainly not always. E.g. the hypervisor may disable MWAIT exiting but not
HLT exiting, so that the hypervisor can take action if a guest kernel refuses to
use MWAIT for whatever reason.
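E.g. on the KVM side, userspace can ask for exactly that combination with
something along these lines (illustration only; vm_fd and the helper name
are made up for the example, the cap must be enabled before any vCPUs are
created):

  /* VMM-side sketch: disable only MWAIT exits, leave HLT exits intact. */
  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  static int disable_mwait_exits_only(int vm_fd)
  {
          struct kvm_enable_cap cap = {
                  .cap = KVM_CAP_X86_DISABLE_EXITS,
                  .args = { KVM_X86_DISABLE_EXITS_MWAIT },
          };

          return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
  }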
I assume native qspinlocks outperform virt_spin_lock() irrespective of HLT exiting
when the vCPU has a dedicated pCPU? If so, it's probably worth calling that out
in the changelog, e.g. to assuage any fears/concerns about this being undesirable
for setups with KVM_HINTS_REALTIME *and* KVM_FEATURE_PV_UNHALT.
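FWIW, the effective policy after the reorder would be roughly the below;
this is just an illustrative sketch using the existing helpers from
arch/x86/kernel/kvm.c, not the literal resulting function:

  static void __init pv_spinlock_policy_sketch(void)
  {
          /* Dedicated pCPUs: always fall back to native qspinlock. */
          if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
                  static_branch_disable(&virt_spin_lock_key);
                  return;
          }

          /*
           * No PV_UNHALT: keep virt_spin_lock_key enabled, since
           * virt_spin_lock() is still preferred over native qspinlock
           * when vCPUs can be preempted.
           */
          if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
                  return;

          /* Otherwise enable PV spinlocks (kvm_wait() + unhalt kick). */
  }

I.e. a setup advertising both KVM_HINTS_REALTIME and KVM_FEATURE_PV_UNHALT
ends up on native qspinlocks with virt_spin_lock_key disabled.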
> rendering the KVM_FEATURE_PV_UNHALT feature unavailable, so
> virt_spin_lock_key is expected to be disabled in this configuration, but:
>
> The problematic execution flow leaves virt_spin_lock_key enabled:
> - First check PV_UNHALT
> - Then check dedicated CPUs
>
> So change the order:
> - First check dedicated CPUs
> - Then check PV_UNHALT
>
> This ensures virt_spin_lock_key is disabled when dedicated physical
> CPUs are available and HLT exit is disabled, which gives a nice
> performance boost at high contention levels.
>
> Signed-off-by: Li RongQing <lirongqing@...du.com>
> ---
> arch/x86/kernel/kvm.c | 20 ++++++++++----------
> 1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 921c1c7..9cda79f 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -1073,16 +1073,6 @@ static void kvm_wait(u8 *ptr, u8 val)
> void __init kvm_spinlock_init(void)
> {
> /*
> - * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> - * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> - * preferred over native qspinlock when vCPU is preempted.
> - */
> - if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> - pr_info("PV spinlocks disabled, no host support\n");
> - return;
> - }
> -
> - /*
> * Disable PV spinlocks and use native qspinlock when dedicated pCPUs
> * are available.
> */
> @@ -1101,6 +1091,16 @@ void __init kvm_spinlock_init(void)
> goto out;
> }
>
> + /*
> + * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> + * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> + * preferred over native qspinlock when vCPU is preempted.
> + */
> + if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> + pr_info("PV spinlocks disabled, no host support\n");
> + return;
> + }
> +
> pr_info("PV spinlocks enabled\n");
>
> __pv_init_lock_hash();
> --
> 2.9.4
>