Message-ID: <f0247587-e90a-1695-1399-47a67c44d861@redhat.com>
Date: Mon, 24 May 2021 15:46:43 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Wanpeng Li <kernellwp@...il.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Cc: Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Ben Segall <bsegall@...gle.com>,
Venkatesh Srinivas <venkateshs@...omium.org>,
David Matlack <dmatlack@...gle.com>,
Paul Mackerras <paulus@...abs.org>,
Suraj Jitindar Singh <sjitindarsingh@...il.com>
Subject: Re: [PATCH v4 1/5] KVM: exit halt polling on need_resched() for both
book3s and generic halt-polling
On 18/05/21 14:00, Wanpeng Li wrote:
> From: Wanpeng Li <wanpengli@...cent.com>
>
> Inspired by commit 262de4102c7bb8 ("kvm: exit halt polling on need_resched()
> as well"), CFS_BANDWIDTH throttling will use resched_task() when there is just
> one task, in order to get the task to block. This was likely allowing VMs to
> overrun their quota when halt polling. Since PPC implements its own
> arch-specific halt-polling logic, we should add the need_resched() check
> there as well. This patch adds a helper function to be shared between the
> book3s and generic halt-polling loops.
>
> Reviewed-by: David Matlack <dmatlack@...gle.com>
> Reviewed-by: Venkatesh Srinivas <venkateshs@...omium.org>
> Cc: Ben Segall <bsegall@...gle.com>
> Cc: Venkatesh Srinivas <venkateshs@...omium.org>
> Cc: Jim Mattson <jmattson@...gle.com>
> Cc: David Matlack <dmatlack@...gle.com>
> Cc: Paul Mackerras <paulus@...abs.org>
> Cc: Suraj Jitindar Singh <sjitindarsingh@...il.com>
> Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
> ---
> v3 -> v4:
> * rename to kvm_vcpu_can_poll
> v2 -> v3:
> * add a helper function
> v1 -> v2:
> * update patch description
>
> arch/powerpc/kvm/book3s_hv.c | 2 +-
> include/linux/kvm_host.h | 2 ++
> virt/kvm/kvm_main.c | 8 ++++++--
> 3 files changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 28a80d240b76..7360350e66ff 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -3936,7 +3936,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
> break;
> }
> cur = ktime_get();
> - } while (single_task_running() && ktime_before(cur, stop));
> + } while (kvm_vcpu_can_poll(cur, stop));
>
> spin_lock(&vc->lock);
> vc->vcore_state = VCORE_INACTIVE;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 2f34487e21f2..ba682f738a25 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1583,4 +1583,6 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
> /* Max number of entries allowed for each kvm dirty ring */
> #define KVM_DIRTY_RING_MAX_ENTRIES 65536
>
> +bool kvm_vcpu_can_poll(ktime_t cur, ktime_t stop);
> +
> #endif
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 6b4feb92dc79..62522c12beba 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2945,6 +2945,11 @@ update_halt_poll_stats(struct kvm_vcpu *vcpu, u64 poll_ns, bool waited)
> vcpu->stat.halt_poll_success_ns += poll_ns;
> }
>
> +bool kvm_vcpu_can_poll(ktime_t cur, ktime_t stop)
> +{
> + return single_task_running() && !need_resched() && ktime_before(cur, stop);
> +}
> +
> /*
> * The vCPU has executed a HLT instruction with in-kernel mode enabled.
> */
> @@ -2973,8 +2978,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
> goto out;
> }
> poll_end = cur = ktime_get();
> - } while (single_task_running() && !need_resched() &&
> - ktime_before(cur, stop));
> + } while (kvm_vcpu_can_poll(cur, stop));
> }
>
> prepare_to_rcuwait(&vcpu->wait);
>
Queued all five, thanks.
Paolo