Message-Id: <1621260028-6467-1-git-send-email-wanpengli@tencent.com>
Date: Mon, 17 May 2021 07:00:24 -0700
From: Wanpeng Li <kernellwp@...il.com>
To: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Ben Segall <bsegall@...gle.com>,
Venkatesh Srinivas <venkateshs@...omium.org>,
David Matlack <dmatlack@...gle.com>,
Paul Mackerras <paulus@...abs.org>,
Suraj Jitindar Singh <sjitindarsingh@...il.com>
Subject: [PATCH v3 1/5] KVM: exit halt polling on need_resched() for both book3s and generic halt-polling
From: Wanpeng Li <wanpengli@...cent.com>
Inspired by commit 262de4102c7bb8 (kvm: exit halt polling on need_resched()
as well): CFS_BANDWIDTH throttling will use resched_task() when there is just
one task, in order to get that task to block. Halt polling was likely allowing
VMs to overrun their quota. Since PPC implements its own arch-specific halt
polling logic, the need_resched() check should be added there as well. This
patch adds a helper function to be shared between the book3s and generic
halt-polling loops.
Cc: Ben Segall <bsegall@...gle.com>
Cc: Venkatesh Srinivas <venkateshs@...omium.org>
Cc: Jim Mattson <jmattson@...gle.com>
Cc: David Matlack <dmatlack@...gle.com>
Cc: Paul Mackerras <paulus@...abs.org>
Cc: Suraj Jitindar Singh <sjitindarsingh@...il.com>
Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
---
v2 -> v3:
* add a helper function
v1 -> v2:
* update patch description
arch/powerpc/kvm/book3s_hv.c | 2 +-
include/linux/kvm_host.h | 2 ++
virt/kvm/kvm_main.c | 9 +++++++--
3 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 28a80d240b76..360165df345b 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3936,7 +3936,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
 				break;
 			}
 			cur = ktime_get();
-		} while (single_task_running() && ktime_before(cur, stop));
+		} while (kvm_vcpu_can_block(cur, stop));
 
 		spin_lock(&vc->lock);
 		vc->vcore_state = VCORE_INACTIVE;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 2f34487e21f2..bf4fd60c4699 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1583,4 +1583,6 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
 /* Max number of entries allowed for each kvm dirty ring */
 #define KVM_DIRTY_RING_MAX_ENTRIES 65536
 
+bool kvm_vcpu_can_block(ktime_t cur, ktime_t stop);
+
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6b4feb92dc79..c81080667fd1 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2945,6 +2945,12 @@ update_halt_poll_stats(struct kvm_vcpu *vcpu, u64 poll_ns, bool waited)
 		vcpu->stat.halt_poll_success_ns += poll_ns;
 }
+
+bool kvm_vcpu_can_block(ktime_t cur, ktime_t stop)
+{
+	return single_task_running() && !need_resched() && ktime_before(cur, stop);
+}
+
 /*
  * The vCPU has executed a HLT instruction with in-kernel mode enabled.
  */
@@ -2973,8 +2979,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 				goto out;
 			}
 			poll_end = cur = ktime_get();
-		} while (single_task_running() && !need_resched() &&
-			ktime_before(cur, stop));
+		} while (kvm_vcpu_can_block(cur, stop));
 	}
 
 	prepare_to_rcuwait(&vcpu->wait);
--
2.25.1