Date:	Thu, 30 Aug 2012 00:51:01 +0530
From:	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To:	Avi Kivity <avi@...hat.com>, Marcelo Tosatti <mtosatti@...hat.com>,
	Rik van Riel <riel@...hat.com>
Cc:	Srikar <srikar@...ux.vnet.ibm.com>,
	"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
	KVM <kvm@...r.kernel.org>,
	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Srivatsa Vaddagiri <srivatsa.vaddagiri@...il.com>,
	Gleb Natapov <gleb@...hat.com>
Subject: [PATCH RFC 1/1] kvm: Use vcpu_id as pivot instead of last boosted vcpu in PLE handler

 The idea of starting from the next vcpu (source of yield_to + 1) seems to work
 better for overcommitted guests than using the last boosted vcpu. We can also
 remove the per-VM variable with this approach.
 
 After this patch, the iteration over eligible candidates starts from vcpu
 source+1 and ends at source-1 (after wrapping around).
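 
 To make the wrap-around order concrete, here is a minimal standalone sketch
 (illustration only, not the KVM code; the helper name and the example values
 are made up) of the candidates visited for a given source vcpu:

/*
 * Illustration only (not part of the patch): candidate visit order for a
 * source vcpu 'src' among 'n' online vcpus, starting at src + 1 and
 * wrapping around to src - 1.
 */
#include <stdio.h>

static void print_candidate_order(int n, int src)
{
	int step;

	for (step = 1; step < n; step++)
		printf("%d ", (src + step) % n);
	printf("\n");
}

int main(void)
{
	print_candidate_order(4, 2);	/* visits vcpus 3 0 1 */
	return 0;
}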
 
 Thanks to Nikunj for his quick verification of the patch.
 
 Please let me know if this patch is interesting and makes sense.

====8<====
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>

 Currently we use the vcpu next to the last boosted vcpu as the starting point
 when deciding the eligible vcpu for directed yield.

 In overcommitted scenarios, if multiple vcpus try to do directed yield,
 they all start from the same vcpu, resulting in wasted CPU time (because of
 failing yields and double runqueue locking).
 
 Since the possibility of the same vcpu doing directed yield is already
 reduced by the improved PLE handler, we can start from the vcpu next to the
 source of yield_to.
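
 For illustration only (the vcpu count and last_boosted_vcpu value below are
 hypothetical, not from the patch), this small sketch contrasts where each
 spinning vcpu begins its search under the old and new schemes:

/*
 * Hypothetical example: with the shared last_boosted_vcpu every spinning
 * vcpu starts its search at the same place, while with this patch each
 * vcpu starts next to its own id.
 */
#include <stdio.h>

int main(void)
{
	int n = 4;			/* online vcpus */
	int last_boosted_vcpu = 1;	/* shared per-VM state (old scheme) */
	int id;

	for (id = 0; id < n; id++)
		printf("vcpu %d: old start %d, new start %d\n",
		       id, (last_boosted_vcpu + 1) % n, (id + 1) % n);
	return 0;
}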

Suggested-by: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
---

 include/linux/kvm_host.h |    1 -
 virt/kvm/kvm_main.c      |   12 ++++--------
 2 files changed, 4 insertions(+), 9 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index b70b48b..64a090d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -275,7 +275,6 @@ struct kvm {
 #endif
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
 	atomic_t online_vcpus;
-	int last_boosted_vcpu;
 	struct list_head vm_list;
 	struct mutex lock;
 	struct kvm_io_bus *buses[KVM_NR_BUSES];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2468523..65a6c83 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1584,7 +1584,6 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 {
 	struct kvm *kvm = me->kvm;
 	struct kvm_vcpu *vcpu;
-	int last_boosted_vcpu = me->kvm->last_boosted_vcpu;
 	int yielded = 0;
 	int pass;
 	int i;
@@ -1594,21 +1593,18 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 	 * currently running, because it got preempted by something
 	 * else and called schedule in __vcpu_run.  Hopefully that
 	 * VCPU is holding the lock that we need and will release it.
-	 * We approximate round-robin by starting at the last boosted VCPU.
+	 * We approximate round-robin by starting at the next VCPU.
 	 */
 	for (pass = 0; pass < 2 && !yielded; pass++) {
 		kvm_for_each_vcpu(i, vcpu, kvm) {
-			if (!pass && i <= last_boosted_vcpu) {
-				i = last_boosted_vcpu;
+			if (!pass && i <= me->vcpu_id) {
+				i = me->vcpu_id;
 				continue;
-			} else if (pass && i > last_boosted_vcpu)
+			} else if (pass && i >= me->vcpu_id)
 				break;
-			if (vcpu == me)
-				continue;
 			if (waitqueue_active(&vcpu->wq))
 				continue;
 			if (kvm_vcpu_yield_to(vcpu)) {
-				kvm->last_boosted_vcpu = i;
 				yielded = 1;
 				break;
 			}
