Message-Id: <1257462632.6560.8.camel@marge.simson.net>
Date: Fri, 06 Nov 2009 00:10:32 +0100
From: Mike Galbraith <efault@....de>
To: Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Eric Paris <eparis@...hat.com>, linux-kernel@...r.kernel.org,
hpa@...or.com, tglx@...utronix.de,
Lai Jiangshan <laijs@...fujitsu.com>
Subject: [patch] Re: There is something with scheduler (was Re: [patch] Re:
[regression bisect -next] BUG: using smp_processor_id() in preemptible
[00000000] code: rmmod)
A bit of late night cut/paste fixed it right up, so tomorrow I can redo
benchmarks etc etc.
Lai, mind giving this a try? I believe this will fix your problem as
well as mine.
sched: fix runqueue locking buglet.
Calling set_task_cpu() with the runqueue unlocked is unsafe. Add a
cpu_rq_lock() locking primitive, and lock the runqueue before calling
set_task_cpu(). Also, update rq->clock beforehand, as it could be stale.
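FWIW, the new cpu_rq_lock() below is the same lock-then-revalidate idiom
that the existing task_rq_lock() uses: do the lockless lookup, take the
lock, then verify the lookup still holds and retry if we raced. A rough
userspace sketch of that idiom follows; all the types and names are made
up for illustration, none of this is kernel code:

/* Userspace sketch of the lock-then-revalidate idiom; names made up. */
#include <pthread.h>
#include <stdio.h>

struct rq {
	pthread_mutex_t lock;
	int cpu;
};

struct task {
	struct rq *rq;		/* runqueue the task currently resides on */
};

static struct rq runqueues[2] = {
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
	{ PTHREAD_MUTEX_INITIALIZER, 1 },
};

/*
 * Lock the runqueue the task resides on.  The task can migrate between
 * the lockless p->rq load and the mutex acquisition, so check again
 * under the lock and retry if we lost the race.
 */
static struct rq *task_rq_lock_sketch(struct task *p)
{
	for (;;) {
		struct rq *rq = p->rq;		/* lockless lookup */

		pthread_mutex_lock(&rq->lock);
		if (rq == p->rq)		/* still the right one */
			return rq;
		pthread_mutex_unlock(&rq->lock);	/* raced, retry */
	}
}

int main(void)
{
	struct task t = { .rq = &runqueues[1] };
	struct rq *rq = task_rq_lock_sketch(&t);

	printf("holding lock of cpu%d's runqueue\n", rq->cpu);
	pthread_mutex_unlock(&rq->lock);
	return 0;
}

In cpu_rq_lock()'s case the cpu->rq mapping can't actually change under
us since cpu is fixed, so AFAICS the recheck there is just cheap paranoia
inherited from task_rq_lock().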
Running netperf UDP_STREAM with two pinned tasks, with tip commit 1b9508f
applied, emitted the thoroughly unbelievable result that ratelimiting
newidle could produce twice the throughput of the virgin kernel. Reverting
to locking the runqueue prior to runqueue selection restored benchmarking
sanity, as, finally, did this patchlet.
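The exact invocation didn't make it into this changelog; a run of that
shape would look roughly like the below (host, CPU numbers and suchlike
are reconstructions, not copied from my scripts):

  taskset -c 0 netserver
  taskset -c 1 netperf -H 127.0.0.1 -t UDP_STREAM -l 60 -- -m 4096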
Before:
git v2.6.32-rc6-26-g91d3f9b virgin
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

 65536    4096   60.00     7340005      0    4008.62
 65536           60.00     7320453           3997.94
git v2.6.32-rc6-26-g91d3f9b with only 1b9508f
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

 65536    4096   60.00    15018541      0    8202.12
 65536           60.00    15018232           8201.96
After:
git v2.6.32-rc6-26-g91d3f9b with only 1b9508f + this patch
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

 65536    4096   60.00     7780289      0    4249.07
 65536           60.00     7779832           4248.82
Signed-off-by: Mike Galbraith <efault@....de>
Cc: Ingo Molnar <mingo@...e.hu>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>
LKML-Reference: <new-submission>
---
 kernel/sched.c |   38 +++++++++++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 7 deletions(-)
Index: linux-2.6.32.git/kernel/sched.c
===================================================================
--- linux-2.6.32.git.orig/kernel/sched.c
+++ linux-2.6.32.git/kernel/sched.c
@@ -1011,6 +1011,32 @@ static struct rq *this_rq_lock(void)
 	return rq;
 }
 
+/*
+ * cpu_rq_lock - lock the runqueue of a given cpu and disable
+ * interrupts. Note the ordering: we can safely lookup the cpu_rq without
+ * explicitly disabling preemption.
+ */
+static struct rq *cpu_rq_lock(int cpu, unsigned long *flags)
+	__acquires(rq->lock)
+{
+	struct rq *rq;
+
+	for (;;) {
+		local_irq_save(*flags);
+		rq = cpu_rq(cpu);
+		spin_lock(&rq->lock);
+		if (likely(rq == cpu_rq(cpu)))
+			return rq;
+		spin_unlock_irqrestore(&rq->lock, *flags);
+	}
+}
+
+static inline void cpu_rq_unlock(struct rq *rq, unsigned long *flags)
+	__releases(rq->lock)
+{
+	spin_unlock_irqrestore(&rq->lock, *flags);
+}
+
 #ifdef CONFIG_SCHED_HRTICK
 /*
  * Use HR-timers to deliver accurate preemption points.
@@ -2345,13 +2371,12 @@ static int try_to_wake_up(struct task_st
 	task_rq_unlock(rq, &flags);
 
 	cpu = p->sched_class->select_task_rq(p, SD_BALANCE_WAKE, wake_flags);
-	if (cpu != orig_cpu)
-		set_task_cpu(p, cpu);
-
-	rq = task_rq_lock(p, &flags);
-
-	if (rq != orig_rq)
+	if (cpu != orig_cpu) {
+		rq = cpu_rq_lock(cpu, &flags);
 		update_rq_clock(rq);
+		set_task_cpu(p, cpu);
+	} else
+		rq = task_rq_lock(p, &flags);
 
 	if (rq->idle_stamp) {
 		u64 delta = rq->clock - rq->idle_stamp;
@@ -2365,7 +2390,6 @@ static int try_to_wake_up(struct task_st
 	}
 
 	WARN_ON(p->state != TASK_WAKING);
-	cpu = task_cpu(p);
 
 #ifdef CONFIG_SCHEDSTATS
 	schedstat_inc(rq, ttwu_count);