Message-ID: <6ea07284-7bc2-ad73-21ab-78eb75a38751@loongson.cn>
Date: Sat, 5 Jul 2025 14:39:34 +0800
From: Bibo Mao <maobibo@...ngson.cn>
To: Liangyan <liangyan.peng@...edance.com>, pbonzini@...hat.com,
vkuznets@...hat.com, tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, hpa@...or.com, wanpengli@...cent.com
Cc: linux-kernel@...r.kernel.org, x86@...nel.org, kvm@...r.kernel.org
Subject: Re: [External] Re: [RFC] x86/kvm: Use native qspinlock by default
when realtime hinted
That is a big improvement in the test results. The spawn test case is a
little tricky: if the forked child process is scheduled on the same CPU as
the parent, the benefit is huge. I suspect the gain comes from the
scheduler rather than from the spinlock itself.
1. What are the CPU topology and NUMA information of the physical machine
and the virtual machine?
2. Could you show the reschedule IPI interrupt statistics when running the
spawn test case?
3. Could you run this case in a CPU over-commit scenario, such as two VMs
each with 120 vCPUs?
Regards
Bibo Mao
On 2025/7/2 8:23 PM, Liangyan wrote:
> We tested that UnixBench spawn shows a big improvement in an Intel 8582c
> 120-CPU guest VM after switching to qspinlock.
>
> Command: ./Run -c 120 spawn
>
> Use virt_spin_lock:
> System Benchmarks Partial Index        BASELINE     RESULT    INDEX
> Process Creation                          126.0    71878.4   5704.6
>                                                             ========
> System Benchmarks Index Score (Partial Only)                 5704.6
>
>
> Use qspinlock:
> System Benchmarks Partial Index        BASELINE     RESULT    INDEX
> Process Creation                          126.0   173566.6  13775.1
>                                                             ========
> System Benchmarks Index Score (Partial Only)                13775.1
>
>
> Regards,
> Liangyan
>
> On 2025/7/2 16:19, Bibo Mao wrote:
>>
>>
>> On 2025/7/2 2:42 PM, Liangyan wrote:
>>> When KVM_HINTS_REALTIME is set and KVM_FEATURE_PV_UNHALT is clear,
>>> the guest currently uses virt_spin_lock.
>>> Since KVM_HINTS_REALTIME is set, using the native qspinlock should be
>>> safe and should perform better than virt_spin_lock.
>> Just curious: do you have actual data showing that the native qspinlock
>> performs better than virt_spin_lock()?
>>
>> By my understanding, qspinlock is not VM-friendly. When the lock is
>> released, it is handed off in strict FIFO order to the waiters in the
>> contention queue. If the first vCPU in the queue is preempted, the
>> other vCPUs cannot get the lock. On a physical machine it is almost
>> impossible for a CPU contending for a lock to be preempted.
>>
>> Regards
>> Bibo Mao
>>>
>>> Signed-off-by: Liangyan <liangyan.peng@...edance.com>
>>> ---
>>> arch/x86/kernel/kvm.c | 18 +++++++++---------
>>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>>> index 921c1c783bc1..9080544a4007 100644
>>> --- a/arch/x86/kernel/kvm.c
>>> +++ b/arch/x86/kernel/kvm.c
>>> @@ -1072,6 +1072,15 @@ static void kvm_wait(u8 *ptr, u8 val)
>>>   */
>>>  void __init kvm_spinlock_init(void)
>>>  {
>>> +	/*
>>> +	 * Disable PV spinlocks and use native qspinlock when dedicated pCPUs
>>> +	 * are available.
>>> +	 */
>>> +	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
>>> +		pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints\n");
>>> +		goto out;
>>> +	}
>>> +
>>>  	/*
>>>  	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
>>>  	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
>>> @@ -1082,15 +1091,6 @@ void __init kvm_spinlock_init(void)
>>>  		return;
>>>  	}
>>>  
>>> -	/*
>>> -	 * Disable PV spinlocks and use native qspinlock when dedicated pCPUs
>>> -	 * are available.
>>> -	 */
>>> -	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
>>> -		pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints\n");
>>> -		goto out;
>>> -	}
>>> -
>>>  	if (num_possible_cpus() == 1) {
>>>  		pr_info("PV spinlocks disabled, single CPU\n");
>>>  		goto out;
>>