Message-ID: <5ffaea5b-fb07-0141-cab8-6dce39071abe@redhat.com>
Date: Wed, 24 Jul 2019 14:17:14 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Wanpeng Li <kernellwp@...il.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Cc: Radim Krčmář <rkrcmar@...hat.com>,
Waiman Long <longman@...hat.com>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH] KVM: X86: Boost queue head vCPU to mitigate lock waiter
preemption
On 24/07/19 11:43, Wanpeng Li wrote:
> From: Wanpeng Li <wanpengli@...cent.com>
>
> Commit 11752adb ("locking/pvqspinlock: Implement hybrid PV queued/unfair
> locks") introduced hybrid PV queued/unfair locks:
> - queued mode (no starvation)
> - unfair mode (good performance on lightly contended locks)
> Lock waiters fall back to the unfair mode especially in VMs with
> over-committed vCPUs, since increasing over-commitment increases the
> likelihood that the queue head vCPU has been preempted and is not
> actively spinning.
>
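
For readers who just want the shape of the hybrid scheme, here is a
minimal standalone sketch of the idea, not the kernel implementation
(the real logic lives in kernel/locking/qspinlock_paravirt.h); the toy
lock type, the steal budget of 4, and the helper names are all
illustrative assumptions:

/*
 * Toy hybrid queued/unfair spinlock: bounded lock stealing up front,
 * then a fair queued fallback (elided here).
 */
#include <stdatomic.h>
#include <stdbool.h>

struct pv_spinlock {
	atomic_int locked;		/* 0 = free, 1 = held */
};

/* Unfair path: opportunistically grab the lock if it happens to be free. */
static bool try_steal(struct pv_spinlock *lock)
{
	int expected = 0;

	return atomic_compare_exchange_strong(&lock->locked, &expected, 1);
}

static void pv_spin_lock(struct pv_spinlock *lock)
{
	/* Bounded stealing: fast on lightly contended locks. */
	for (int i = 0; i < 4; i++)
		if (try_steal(lock))
			return;

	/*
	 * Fair fallback, which is what prevents starvation: the real
	 * code enqueues, spins on a private node and eventually halts
	 * the vCPU.  The plain spin below is only a placeholder for
	 * queued_spin_lock_slowpath().
	 */
	while (!try_steal(lock))
		;
}
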
> However, rescheduling the queue head vCPU in time so that it can acquire
> the lock still performs better than depending on lock stealing alone in
> over-subscribed scenarios.
>
> Testing on an 80-HT, 2-socket Xeon Skylake server, with 80-vCPU, 80 GB
> RAM VMs:
> ebizzy -M
>         vanilla   boosting   improved
> 1VM       23520      25040         6%
> 2VM        8000      13600        70%
> 3VM        3100       5400        74%
>
> On unlock, the lock holder vCPU yields to the queue head vCPU, boosting
> a queue head that was either involuntarily preempted or voluntarily
> halted after failing to acquire the lock within a short spin in the
> guest.
Clever! I have applied the patch.
Paolo
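
For context: the guest side already wakes the halted queue head with the
KVM_HC_KICK_CPU hypercall, and this patch makes the host pair that kick
with a directed yield. The guest kick path looks roughly like this
(paraphrased from arch/x86/kernel/kvm.c; treat details as approximate):

/* Resolve the target CPU's APIC ID and hand it to the hypervisor;
 * a1 in the handler patched below is this APIC ID. */
static void kvm_kick_cpu(int cpu)
{
	int apicid = per_cpu(x86_cpu_to_apicid, cpu);

	kvm_hypercall2(KVM_HC_KICK_CPU, 0, apicid);
}

With the patch applied, the KVM_HC_KICK_CPU handler both kicks the
target vCPU and donates the kicker's remaining timeslice to it via
kvm_sched_yield(), as the second hunk of the diff below shows.
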
> Cc: Waiman Long <longman@...hat.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: Radim Krčmář <rkrcmar@...hat.com>
> Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
> ---
> arch/x86/kvm/x86.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 01e18ca..c6d951c 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7206,7 +7206,7 @@ static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
>
> rcu_read_unlock();
>
> - if (target)
> + if (target && READ_ONCE(target->ready))
> kvm_vcpu_yield_to(target);
> }
>
> @@ -7246,6 +7246,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> break;
> case KVM_HC_KICK_CPU:
> kvm_pv_kick_cpu_op(vcpu->kvm, a0, a1);
> + kvm_sched_yield(vcpu->kvm, a1);
> ret = 0;
> break;
> #ifdef CONFIG_X86_64
>
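
For reference, kvm_vcpu_yield_to() resolves the task backing the target
vCPU and asks the scheduler for a directed yield; the sketch below is
from memory, so the in-tree version in virt/kvm/kvm_main.c may differ in
detail. The new READ_ONCE(target->ready) check in the first hunk simply
skips this directed yield when the kicked vCPU is not marked ready to
run, so the boost only goes to a target that can use it.

int kvm_vcpu_yield_to(struct kvm_vcpu *target)
{
	struct task_struct *task;
	int ret = 0;

	/* Resolve the task backing the target vCPU, if it still exists. */
	rcu_read_lock();
	task = get_pid_task(rcu_dereference(target->pid), PIDTYPE_PID);
	rcu_read_unlock();
	if (!task)
		return ret;

	/* Directed yield: donate the caller's timeslice to the target. */
	ret = yield_to(task, 1);
	put_task_struct(task);

	return ret;
}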