Message-ID: <9c49c6ff-d896-e6a5-c051-b6707f6ec58a@redhat.com>
Date: Sat, 17 Apr 2021 15:09:08 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Wanpeng Li <kernellwp@...il.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Cc: Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>
Subject: Re: [PATCH] KVM: Boost vCPU candidate in user mode which is
delivering interrupt
On 16/04/21 05:08, Wanpeng Li wrote:
> From: Wanpeng Li <wanpengli@...cent.com>
>
> Both the lock holder vCPU and a halted IPI receiver are candidates for
> boosting. However, the PLE handler was originally designed to deal with
> the lock holder preemption problem: Intel PLE occurs when the spinlock
> waiter is in kernel mode. This assumption does not hold for IPI
> receivers, which can be in either kernel or user mode, so a vCPU
> candidate in user mode will not be boosted even when it should respond
> to an IPI. Some benchmarks, such as pbzip2 and swaptions, do TLB
> shootdowns in kernel mode but run in user mode most of the time. This
> can lead to a long stream of PLE events, because the IPI sender keeps
> triggering PLE exits until the receiver is finally scheduled, yet the
> receiver is never a candidate for a boost.
>
> This patch boosts a vCPU candidate in user mode to which an interrupt
> is being delivered. We observe a 10% speedup of pbzip2 in a 96-vCPU VM
> in an over-subscribed scenario (the host is a 2-socket, 48-core, 96-HT
> Intel CLX box). There is no performance regression for other benchmarks
> such as Unixbench spawn (which mostly contends a read/write lock in
> kernel mode) and ebizzy (which mostly contends a read/write semaphore
> and does TLB shootdowns in kernel mode).
>
> +bool kvm_arch_interrupt_delivery(struct kvm_vcpu *vcpu)
> +{
> +	if (vcpu->arch.apicv_active && static_call(kvm_x86_dy_apicv_has_pending_interrupt)(vcpu))
> +		return true;
> +
> +	return false;
> +}
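
(For context, the kvm_vcpu_on_spin() check that currently skips
preempted user-mode vCPUs, and so never boosts such an IPI receiver,
is the condition your second hunk below modifies:)

	if (READ_ONCE(vcpu->preempted) && yield_to_kernel_mode &&
	    !kvm_arch_vcpu_in_kernel(vcpu))
		continue;
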
Can you reuse vcpu_dy_runnable instead of this new function?
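Something like this untested sketch: vcpu_dy_runnable() already calls
kvm_arch_dy_runnable(), which on x86 already checks for a pending APICv
interrupt (among other wakeup events), so a user-mode IPI receiver
should be covered without a new arch hook:

	if (READ_ONCE(vcpu->preempted) && yield_to_kernel_mode &&
	    !vcpu_dy_runnable(vcpu) &&
	    !kvm_arch_vcpu_in_kernel(vcpu))
		continue;
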
Paolo
>  bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
>  {
>  	return vcpu->arch.preempted_in_kernel;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 3b06d12..5012fc4 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -954,6 +954,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
>  bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);
>  int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
>  bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu);
> +bool kvm_arch_interrupt_delivery(struct kvm_vcpu *vcpu);
>  int kvm_arch_post_init_vm(struct kvm *kvm);
>  void kvm_arch_pre_destroy_vm(struct kvm *kvm);
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 0a481e7..781d2db 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -3012,6 +3012,11 @@ static bool vcpu_dy_runnable(struct kvm_vcpu *vcpu)
>  	return false;
>  }
>  
> +bool __weak kvm_arch_interrupt_delivery(struct kvm_vcpu *vcpu)
> +{
> +	return false;
> +}
> +
>  void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
>  {
>  	struct kvm *kvm = me->kvm;
> @@ -3045,6 +3050,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
>  			    !vcpu_dy_runnable(vcpu))
>  				continue;
>  			if (READ_ONCE(vcpu->preempted) && yield_to_kernel_mode &&
> +			    !kvm_arch_interrupt_delivery(vcpu) &&
>  			    !kvm_arch_vcpu_in_kernel(vcpu))
>  				continue;
>  			if (!kvm_vcpu_eligible_for_directed_yield(vcpu))
>