Date:	Mon, 24 Sep 2012 14:36:05 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
Cc:	"H. Peter Anvin" <hpa@...or.com>,
	Marcelo Tosatti <mtosatti@...hat.com>,
	Ingo Molnar <mingo@...hat.com>, Avi Kivity <avi@...hat.com>,
	Rik van Riel <riel@...hat.com>,
	Srikar <srikar@...ux.vnet.ibm.com>,
	"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
	KVM <kvm@...r.kernel.org>, Jiannan Ouyang <ouyang@...pitt.edu>,
	chegu vinod <chegu_vinod@...com>,
	"Andrew M. Theurer" <habanero@...ux.vnet.ibm.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Srivatsa Vaddagiri <srivatsa.vaddagiri@...il.com>,
	Gleb Natapov <gleb@...hat.com>,
	Andrew Jones <drjones@...hat.com>
Subject: Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios
 in PLE handler

On Mon, 2012-09-24 at 17:22 +0530, Raghavendra K T wrote:
> On 09/24/2012 05:04 PM, Peter Zijlstra wrote:
> > On Fri, 2012-09-21 at 17:29 +0530, Raghavendra K T wrote:
> >> In some special scenarios, like #vcpu <= #pcpu, the PLE handler may
> >> prove very costly, because there is no need to iterate over the vcpus
> >> and burn CPU on unsuccessful yield_to() attempts.
> >
> > What's the costly thing? The vm-exit, the yield (which should be a nop
> > if it's the only task there) or something else entirely?
> >
> Both the vmexit and yield_to(), actually,
> 
> because an unsuccessful yield_to() is, overall, costly in the PLE handler.
> 
> This is because with large guests, say 32/16 vcpus, when one vcpu is
> holding the lock and the rest of the vcpus are waiting for it, each
> waiter takes a PLE exit, iterates over the rest of the vcpu list in
> the VM, and attempts a directed yield that fails (O(n^2) tries in total).
> 
> This results in a fairly high amount of CPU burning and double runqueue
> lock contention.
> 
> (Had they simply kept spinning, lock progress would probably have been
> faster.) As Avi/Chegu Vinod felt, it would be better to avoid the vmexit
> itself, but that seems a little complex to achieve currently.
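
For illustration, the loop being described is roughly the following (a
simplified sketch, not the actual virt/kvm/kvm_main.c code; the helper
names only approximate kvm_vcpu_on_spin() and kvm_vcpu_yield_to()):

	/*
	 * Sketch of today's PLE handler: the spinning vcpu scans every other
	 * vcpu in the VM looking for one to boost.  With #vcpu <= #pcpu the
	 * candidates are usually already running, so almost every directed
	 * yield fails; n lock waiters each doing this scan is what gives the
	 * O(n^2) failed yield_to() attempts, each paying a vm-exit and a
	 * double runqueue lock.
	 */
	static void ple_handler_sketch(struct kvm_vcpu *me)
	{
		struct kvm_vcpu *vcpu;
		int i;

		kvm_for_each_vcpu(i, vcpu, me->kvm) {
			if (vcpu == me)
				continue;
			if (waitqueue_active(&vcpu->wq))
				continue;	/* halted, not spinning on the lock */
			if (kvm_vcpu_yield_to(vcpu))
				break;		/* boosted a candidate, done */
			/* otherwise the exit and the rq locks were wasted */
		}
	}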

OK, so the vmexit stays and we need to improve yield_to.

How about something like the below? That would allow breaking out of the
for-each-vcpu loop and simply going back into the VM, right?

---
 kernel/sched/core.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b38f00e..5d5b355 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4272,7 +4272,10 @@ EXPORT_SYMBOL(yield);
  * It's the caller's job to ensure that the target task struct
  * can't go away on us before we can do any checks.
  *
- * Returns true if we indeed boosted the target task.
+ * Returns:
+ *   true (>0) if we indeed boosted the target task.
+ *   false (0) if we failed to boost the target.
+ *   -ESRCH if there's no task to yield to.
  */
 bool __sched yield_to(struct task_struct *p, bool preempt)
 {
@@ -4284,6 +4287,15 @@ bool __sched yield_to(struct task_struct *p, bool preempt)
 	local_irq_save(flags);
 	rq = this_rq();
 
+	/*
+	 * If we're the only runnable task on the rq, there's absolutely no
+	 * point in yielding.
+	 */
+	if (rq->nr_running == 1) {
+		yielded = -ESRCH;
+		goto out_irq;
+	}
+
 again:
 	p_rq = task_rq(p);
 	double_rq_lock(rq, p_rq);
@@ -4293,13 +4305,13 @@ bool __sched yield_to(struct task_struct *p, bool preempt)
 	}
 
 	if (!curr->sched_class->yield_to_task)
-		goto out;
+		goto out_unlock;
 
 	if (curr->sched_class != p->sched_class)
-		goto out;
+		goto out_unlock;
 
 	if (task_running(p_rq, p) || p->state)
-		goto out;
+		goto out_unlock;
 
 	yielded = curr->sched_class->yield_to_task(rq, p, preempt);
 	if (yielded) {
@@ -4312,11 +4324,12 @@ bool __sched yield_to(struct task_struct *p, bool preempt)
 			resched_task(p_rq->curr);
 	}
 
-out:
+out_unlock:
 	double_rq_unlock(rq, p_rq);
+out_irq:
 	local_irq_restore(flags);
 
-	if (yielded)
+	if (yielded > 0)
 		schedule();
 
 	return yielded;
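
On the caller side, the PLE handler could then key off the new -ESRCH
return and stop scanning as soon as there is nobody worth yielding to.
Roughly (sketch only; it assumes kvm_vcpu_yield_to() is changed to return
an int and propagate yield_to()'s return value, and candidate() stands in
for the existing eligibility checks):

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (!candidate(vcpu))		/* same filtering as today */
			continue;

		yielded = kvm_vcpu_yield_to(vcpu);
		if (yielded > 0)
			break;	/* boosted a candidate, done */
		if (yielded < 0)
			break;	/* -ESRCH: we're alone on our rq, re-enter the guest */
		/* yielded == 0: try the next vcpu */
	}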

