Message-ID: <1294072517.2016.101.camel@laptop>
Date: Mon, 03 Jan 2011 17:35:17 +0100
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Yong Zhang <yong.zhang0@...il.com>,
Chris Mason <chris.mason@...cle.com>,
Frank Rowand <frank.rowand@...sony.com>,
Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
Jens Axboe <axboe@...nel.dk>,
Steven Rostedt <rostedt@...dmis.org>,
linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH 08/17] sched: Drop the rq argument to
sched_class::select_task_rq()
On Mon, 2011-01-03 at 16:49 +0100, Oleg Nesterov wrote:
> Ah, sorry for the confusion, I only meant the sched_exec() case.
> set_cpus_allowed_ptr() does need the need_migrate_task() check, of course.
OK, removed the sched_exec() test, we'll see if anything explodes ;-)
> As for set_cpus_allowed_ptr()->need_migrate_task() path, I have another
> question,
>
> static bool need_migrate_task(struct task_struct *p)
> {
> /*
> * If the task is not on a runqueue (and not running), then
> * the next wake-up will properly place the task.
> */
> smp_rmb(); /* finish_lock_switch() */
> return p->on_rq || p->on_cpu;
> }
>
> I don't understand this smp_rmb(). Yes, finish_lock_switch() does
> wmb() before it clears ->on_cpu, but how can these two barriers pair?
You mean the fact that I fouled up and didn't cross them (both
barriers come before their respective accesses)? I've now placed the
rmb after reading ->on_cpu.
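For reference, the write side this is meant to pair with, in
finish_lock_switch(), looks roughly like this (a sketch, not verbatim
from the tree):

	/*
	 * Once ->on_cpu is cleared the task can be moved to another
	 * CPU, so everything the switch did must be visible first.
	 */
	smp_wmb();
	prev->on_cpu = 0;

With the hunk below, the loads in need_migrate_task() come before the
rmb, so the two barriers end up on opposite sides of the ->on_cpu
access.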
> In fact, I am completely confused. I do not understand why we check
> task_running() at all. If we see on_rq == 0 && on_cpu == 1,
> then this task is going to clear its on_cpu soon, once it finishes
> context_switch().
> Probably this check was needed before, when try_to_wake_up() could
> activate a task_running() task without migrating it. But, at first
> glance, this is no longer possible after this series?
It can still do that. I think the problem is us dropping rq->lock in
the middle of schedule(): if the freshly woken migration thread comes
in during that window and moves the task away, you can end up with two
cpus referencing the same task_struct at the same time, which usually
leads to 'interesting' situations.
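Roughly, the window looks like this (illustrative sketch only, not
taken from the patch):

	CPU0 (schedule(), prev == p)       CPU1 (freshly woken migration thread)

	deactivates p (p->on_rq = 0)
	drops rq->lock mid-schedule()
	                                   takes rq->lock
	                                   moves p to another rq, which can
	                                   start running p right away
	still inside context_switch(),
	prev == p, p->on_cpu still 1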
---
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -2144,8 +2144,9 @@ static bool need_migrate_task(struct tas
 	 * If the task is not on a runqueue (and not running), then
 	 * the next wake-up will properly place the task.
 	 */
+	bool running = p->on_rq || p->on_cpu;
 	smp_rmb(); /* finish_lock_switch() */
-	return p->on_rq || p->on_cpu;
+	return running;
 }
 
 /*
@@ -3416,7 +3417,7 @@ void sched_exec(void)
 	if (dest_cpu == smp_processor_id())
 		goto unlock;
 
-	if (likely(cpu_active(dest_cpu)) && need_migrate_task(p)) {
+	if (likely(cpu_active(dest_cpu))) {
 		struct migration_arg arg = { p, dest_cpu };
 
 		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
--