Message-Id: <20121029140717.15448.83182.sendpatchset@codeblue>
Date: Mon, 29 Oct 2012 19:37:18 +0530
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>,
"H. Peter Anvin" <hpa@...or.com>, Avi Kivity <avi@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Rik van Riel <riel@...hat.com>
Cc: Srikar <srikar@...ux.vnet.ibm.com>,
"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
KVM <kvm@...r.kernel.org>,
Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
Jiannan Ouyang <ouyang@...pitt.edu>,
Chegu Vinod <chegu_vinod@...com>,
"Andrew M. Theurer" <habanero@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>,
Srivatsa Vaddagiri <srivatsa.vaddagiri@...il.com>,
Gleb Natapov <gleb@...hat.com>,
Andrew Jones <drjones@...hat.com>
Subject: [PATCH V2 RFC 3/3] kvm: Check system load and handle different commit cases accordingly
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
The patch introduces a helper function that calculates the system load
(idea borrowed from the loadavg calculation). The load is normalized to
2048, i.e., a return value (threshold) of 2048 implies an approximately
1:1 committed guest.

In undercommit cases (load < threshold/2) we simply return from the PLE
handler. In overcommit cases (load > 1.75 * threshold) we do a yield().
The rationale is to allow other VMs on the host to run instead of
burning CPU cycles.
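With threshold = 2048, the undercommit limit works out to 1024 and the
overcommit limit to (2048 << 1) - (2048 >> 2) = 4096 - 512 = 3584,
which is exactly 1.75 * 2048.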
Reviewed-by: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
---
The idea of yielding in overcommit cases (especially with a large number
of small guests) was
Acked-by: Rik van Riel <riel@...hat.com>

Andrew Theurer has also stressed the importance of reducing yield_to
overhead and using yield().
(let threshold = 2048)

Rationale for using threshold/2 as the undercommit limit:
A load below (0.5 * threshold) is used to avoid (a concern raised by
Rik) scenarios where we still have a lock-holder-preempted vcpu waiting
to be scheduled (this arises when rq length is > 1 even when we are
undercommitted).

Rationale for using (1.75 * threshold) as the overcommit limit:
This is a heuristic: at such a load we should probably see rq length > 1,
with a vcpu of a different VM waiting to be scheduled.
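As a sanity check (illustration only, not part of the patch), here is a
minimal userspace mock of the fixed-point math, assuming FIXED_1 = 1 << 11
as in the kernel's loadavg code; an 8-cpu host with loadavg ~8.00
normalizes to approximately the 1:1 commit threshold:

#include <assert.h>
#include <stdio.h>

#define FIXED_1 (1UL << 11)	/* kernel loadavg fixed point: 1.0 == 2048 */
#define COMMIT_THRESHOLD      (FIXED_1)
#define UNDERCOMMIT_THRESHOLD (COMMIT_THRESHOLD >> 1)
#define OVERCOMMIT_THRESHOLD  ((COMMIT_THRESHOLD << 1) - (COMMIT_THRESHOLD >> 2))

int main(void)
{
	/* 8 busy vcpus on an 8-cpu host: loadavg ~8.00 */
	unsigned long avenrun0 = 8 * FIXED_1;
	unsigned long load = (avenrun0 + FIXED_1 / 200) / 8;

	printf("normalized load = %lu (1:1 commit = %lu)\n",
	       load, COMMIT_THRESHOLD);
	assert(UNDERCOMMIT_THRESHOLD == 1024);
	assert(OVERCOMMIT_THRESHOLD == 3584);	/* 1.75 * FIXED_1 */
	return 0;
}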
virt/kvm/kvm_main.c | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e376434..28bbdfb 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1697,15 +1697,43 @@ bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
 }
 #endif
 
+/*
+ * A load of 2048 corresponds to a 1:1 committed guest. The
+ * undercommit threshold is half of that, and the overcommit
+ * threshold is 1.75 times the 1:1 commit threshold.
+ */
+#define COMMIT_THRESHOLD (FIXED_1)
+#define UNDERCOMMIT_THRESHOLD (COMMIT_THRESHOLD >> 1)
+#define OVERCOMMIT_THRESHOLD ((COMMIT_THRESHOLD << 1) - (COMMIT_THRESHOLD >> 2))
+
+unsigned long kvm_system_load(void)
+{
+	unsigned long load;
+
+	load = avenrun[0] + FIXED_1/200;
+	load = load / num_online_cpus();
+
+	return load;
+}
+
 void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 {
 	struct kvm *kvm = me->kvm;
 	struct kvm_vcpu *vcpu;
 	int last_boosted_vcpu = me->kvm->last_boosted_vcpu;
 	int yielded = 0;
+	unsigned long load;
 	int pass;
 	int i;
 
+	load = kvm_system_load();
+	/*
+	 * When we are undercommitted let us not waste time
+	 * iterating over all the VCPUs.
+	 */
+	if (load < UNDERCOMMIT_THRESHOLD)
+		return;
+
 	kvm_vcpu_set_in_spin_loop(me, true);
 	/*
 	 * We boost the priority of a VCPU that is runnable but not
@@ -1735,6 +1763,13 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 			break;
 		}
 	}
+	/*
+	 * If we are not able to yield, especially in overcommit cases,
+	 * let us be courteous to other VMs' VCPUs waiting to be scheduled.
+	 */
+	if (!yielded && load > OVERCOMMIT_THRESHOLD)
+		yield();
+
 	kvm_vcpu_set_in_spin_loop(me, false);
 
 	/* Ensure vcpu is not eligible during next spinloop */
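
For anyone curious which regime a given host would land in, here is a
rough userspace analogue of kvm_system_load() (my own sketch, not part
of the patch): it reads the 1-minute load average, scales it to the same
fixed-point representation, and divides by the online cpu count.

#include <stdio.h>
#include <unistd.h>

#define FIXED_1 (1UL << 11)	/* kernel loadavg fixed point: 1.0 == 2048 */

int main(void)
{
	double avg1;
	FILE *f = fopen("/proc/loadavg", "r");
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);

	if (!f || fscanf(f, "%lf", &avg1) != 1 || ncpus < 1)
		return 1;
	fclose(f);

	unsigned long load = (unsigned long)(avg1 * FIXED_1 + FIXED_1 / 200.0)
			     / (unsigned long)ncpus;

	printf("normalized load = %lu (undercommit < %lu, overcommit > %lu)\n",
	       load, FIXED_1 >> 1, (FIXED_1 << 1) - (FIXED_1 >> 2));
	return 0;
}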
--