Message-Id: <4868DCC7.BA47.005A.0@novell.com>
Date: Mon, 30 Jun 2008 11:16:55 -0600
From: "Gregory Haskins" <ghaskins@...ell.com>
To: "Ingo Molnar" <mingo@...e.hu>
Cc: <rostedt@...dmis.org>, <peterz@...radead.org>, <npiggin@...e.de>,
<linux-kernel@...r.kernel.org>, <linux-rt-users@...r.kernel.org>
Subject: Re: [PATCH 0/3] sched: newidle and RT wake-buddy fixes
>>> On Mon, Jun 30, 2008 at 9:15 AM, in message <20080630131511.GA7506@...e.hu>,
Ingo Molnar <mingo@...e.hu> wrote:
> * Gregory Haskins <ghaskins@...ell.com> wrote:
>
>> Hi Ingo,
>> The following patches apply to linux-tip/sched/devel and enhance the
>> performance of the kernel (specifically in PREEMPT_RT, though they do
>> not regress mainline performance as far as I can tell). They offer
>> somewhere between 50-100% speedups in netperf performance, depending
>> on the test.
>
> -tip testing found this boot hang:

I may have found the issue: it looks like the hunk that drops the lock
and enables interrupts at the top of load_balance_newidle() was
inadvertently applied to load_balance() instead during the merge to
linux-tip. If you fold the following patch into my original patch, it
should set things right again.

-----
sched: fix merge problem with newidle enhancement patch

From: Gregory Haskins <ghaskins@...ell.com>

commit cc8160c56843201891766660e3816d2e546c1b17 introduces a locking
enhancement for newidle. However, one hunk was misapplied to
load_balance() instead of load_balance_newidle(). This patch fixes the
issue.

Signed-off-by: Gregory Haskins <ghaskins@...ell.com>
---
 kernel/sched.c |   18 +++++++++---------
 1 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index f35d73c..f36406f 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3459,15 +3459,6 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 
 	cpus_setall(*cpus);
 
-	schedstat_inc(sd, lb_count[CPU_NEWLY_IDLE]);
-
-	/*
-	 * We are in a preempt-disabled section, so dropping the lock/irq
-	 * here simply means that other cores may acquire the lock,
-	 * and interrupts may occur.
-	 */
-	spin_unlock_irq(&this_rq->lock);
-
 	/*
 	 * When power savings policy is enabled for the parent domain, idle
 	 * sibling can pick up load irrespective of busy siblings. In this case,
@@ -3630,6 +3621,15 @@ load_balance_newidle(int this_cpu, struct rq *this_rq, struct sched_domain *sd,
 
 	cpus_setall(*cpus);
 
+	schedstat_inc(sd, lb_count[CPU_NEWLY_IDLE]);
+
+	/*
+	 * We are in a preempt-disabled section, so dropping the lock/irq
+	 * here simply means that other cores may acquire the lock,
+	 * and interrupts may occur.
+	 */
+	spin_unlock_irq(&this_rq->lock);
+
 	/*
 	 * When power savings policy is enabled for the parent domain, idle
 	 * sibling can pick up load irrespective of busy siblings. In this case,
--
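
For reference, a rough sketch of the locking pattern the relocated hunk
establishes in load_balance_newidle(). This is illustrative only: the
function name, signature, and the elided balancing work are simplified
stand-ins, not the actual kernel/sched.c code.

/*
 * Sketch of the newidle locking pattern (simplified; not the real
 * kernel code).  The newidle path runs with preemption disabled and
 * this_rq->lock held, so dropping the lock and re-enabling IRQs here
 * cannot migrate us off this CPU; it only lets other cores take the
 * lock and lets interrupts fire while we look for work to pull.
 */
static int newidle_balance_sketch(int this_cpu, struct rq *this_rq,
				  struct sched_domain *sd, cpumask_t *cpus)
{
	int pulled = 0;

	cpus_setall(*cpus);

	schedstat_inc(sd, lb_count[CPU_NEWLY_IDLE]);

	/* Drop the rq lock and enable interrupts (the hunk above). */
	spin_unlock_irq(&this_rq->lock);

	/* ... find_busiest_group()/move_tasks() balancing work ... */

	/* Re-take the lock with IRQs off before returning to schedule(). */
	spin_lock_irq(&this_rq->lock);

	return pulled;
}

Placed in load_balance() instead, the same spin_unlock_irq() presumably
releases a lock the periodic balancing path never took, which would be
consistent with the boot hang -tip testing hit.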