Message-ID: <20071101060638.GA3134@linux.vnet.ibm.com>
Date: Thu, 1 Nov 2007 11:36:38 +0530
From: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...e.hu>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: sched: fix new task startup crash
On Wed, Oct 31, 2007 at 05:00:59PM +0100, Peter Zijlstra wrote:
> Hi,
Hi Peter,
> Commit: b9dca1e0fcb696716840a3bc8f20a6941b484dbf
>
> seems to me that by calling enqueue_fair_task() from task_new_fair() is
> wrong. The wakeup=1 in enqueue_fair_task() will cause all non-top
s/non-top/non-task?
> sched_entities to be re-positioned by place_entity().
Yes, provided they weren't on the tree before (i.e., the task being
enqueued is the first task of that group on that particular cpu).
If a group doesn't have any tasks on a particular cpu, then the group's
sched entity object for that cpu will not be present in a cfs_rq
(tg->se[i]->on_rq = 0). Subsequently, after some time T, when a task gets
added to the group's cfs_rq on that cpu, the group's sched entity object
also gets enqueued on the cfs_rq.
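For reference, the hierarchical enqueue path in kernel/sched_fair.c looks
roughly like this (a simplified sketch from memory, not the exact code):

static void enqueue_task_fair(struct rq *rq, struct task_struct *p, int wakeup)
{
        struct cfs_rq *cfs_rq;
        struct sched_entity *se = &p->se;

        /*
         * Walk from the task's se up through its group se's.  An entity
         * that is already on_rq (and hence all of its parents) is left
         * alone; otherwise it gets enqueued -- so a group se is enqueued
         * exactly when the first task of that group shows up on this cpu.
         */
        for_each_sched_entity(se) {
                if (se->on_rq)
                        break;
                cfs_rq = cfs_rq_of(se);
                enqueue_entity(cfs_rq, se, wakeup);
                wakeup = 1;     /* parents are always enqueued as wakeups */
        }
}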
This time T is effectively the sleep time for the group sched entity
object, while it was "sleeping" waiting for tasks to get added on that cpu.
Hence the enqueue of group-level sched entity objects is always treated as
a wakeup from sleep: they are repositioned in the tree via place_entity(),
rather than put back at the position (se->vruntime) they had when they
went to sleep.
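To spell out what that placement does, place_entity() at the time did
roughly the following (again a simplified sketch from memory; the feature
flag checks and the exact sleeper credit are omitted):

static void
place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
{
        u64 vruntime = cfs_rq->min_vruntime;

        if (!initial) {
                /* give a sleeper some credit, bounded by one latency period */
                vruntime -= sysctl_sched_latency;

                /* but never let an entity gain by being placed backwards */
                vruntime = max_vruntime(se->vruntime, vruntime);
        }

        se->vruntime = vruntime;
}

So a re-enqueued group se ends up close to the cfs_rq's current
min_vruntime instead of wherever its stale se->vruntime was.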
That sounds correct to me. If we didn't treat the enqueue of group-level
entities as a "wakeup from sleep", then a group that gets a task added on
a cpu after a long time would "hog" that cpu, trying to catch up for all
the lost time, wouldn't it?
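Just to put rough numbers on that (a purely illustrative userspace model,
not kernel code -- the values below are made up): with pick-by-min-vruntime
and equal weights, an entity re-enqueued far behind min_vruntime keeps
getting picked until it burns off that lag.

#include <stdio.h>

int main(void)
{
        /* hypothetical numbers, in nanoseconds of vruntime */
        unsigned long long min_vruntime   = 10000000000ULL; /* cfs_rq now     */
        unsigned long long stale_vruntime =    50000000ULL; /* group se, old  */
        unsigned long long latency        =    20000000ULL; /* sleeper credit */

        /* no re-placement: the group se keeps its stale vruntime */
        unsigned long long lag_stale  = min_vruntime - stale_vruntime;

        /* place_entity()-style re-placement near min_vruntime */
        unsigned long long lag_placed = latency;

        printf("stale vruntime : ~%llu ms of catch-up time\n",
               lag_stale / 1000000ULL);
        printf("re-placed      : ~%llu ms of catch-up time\n",
               lag_placed / 1000000ULL);
        return 0;
}

With those made-up numbers the stale placement gives ~9950 ms of catch-up
versus ~20 ms when re-placed near min_vruntime.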
> Although the current implementation thereof seems to avoid doing
> something horrible.
--
Regards,
vatsa
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/