Message-ID: <96d0bcf5-a18e-770d-3962-a8c330a2f803@redhat.com>
Date: Fri, 5 Jul 2019 17:46:41 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Wanpeng Li <kernellwp@...il.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Cc: Radim Krčmář <rkrcmar@...hat.com>
Subject: Re: [PATCH v2] KVM: LAPIC: Retry tune per-vCPU timer_advance_ns if
adaptive tuning goes insane
On 05/07/19 17:23, Wanpeng Li wrote:
> From: Wanpeng Li <wanpengli@...cent.com>
>
> Retry tuning the per-vCPU timer_advance_ns if adaptive tuning goes insane,
> which can happen sporadically in production environments.
>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: Radim Krčmář <rkrcmar@...hat.com>
> Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
> ---
> v1 -> v2:
> * retry max 10 times if adaptive tuning goes insane
Is there any advantage in stopping the retry? (Also, it should not be a
local variable, of course.)
Paolo
> arch/x86/kvm/lapic.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> index 95affa5..bd0dbe5 100644
> --- a/arch/x86/kvm/lapic.c
> +++ b/arch/x86/kvm/lapic.c
> @@ -1538,6 +1538,7 @@ static inline void adjust_lapic_timer_advance(struct kvm_vcpu *vcpu,
> struct kvm_lapic *apic = vcpu->arch.apic;
> u32 timer_advance_ns = apic->lapic_timer.timer_advance_ns;
> u64 ns;
> + uint retry_count = 0;
>
> /* too early */
> if (advance_expire_delta < 0) {
> @@ -1556,8 +1557,10 @@ static inline void adjust_lapic_timer_advance(struct kvm_vcpu *vcpu,
> if (abs(advance_expire_delta) < LAPIC_TIMER_ADVANCE_ADJUST_DONE)
> apic->lapic_timer.timer_advance_adjust_done = true;
> if (unlikely(timer_advance_ns > 5000)) {
> - timer_advance_ns = 0;
> - apic->lapic_timer.timer_advance_adjust_done = true;
> + timer_advance_ns = 1000;
> + apic->lapic_timer.timer_advance_adjust_done = false;
> + if (++retry_count > 10)
> + apic->lapic_timer.timer_advance_adjust_done = true;
> }
> apic->lapic_timer.timer_advance_ns = timer_advance_ns;
> }
>