Message-ID: <20081110092937.GJ22392@elte.hu>
Date: Mon, 10 Nov 2008 10:29:37 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ken Chen <kenchen@...gle.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Mike Galbraith <efault@....de>
Subject: Re: [patch] restore sched_exec load balance heuristics
* Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
> void sched_exec(void)
> {
> int new_cpu, this_cpu = get_cpu();
> - new_cpu = sched_balance_self(this_cpu, SD_BALANCE_EXEC);
> + struct task_group *tg;
> + long weight, eload;
> +
> + tg = task_group(current);
> + weight = current->se.load.weight;
> + eload = -effective_load(tg, this_cpu, -weight, -weight);
> +
> + new_cpu = sched_balance_self(this_cpu, SD_BALANCE_EXEC, eload);
okay, i think this will work.
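(Just to illustrate the idea, here's a rough userspace model - a
hypothetical sketch only, not the kernel code, with made-up
pick_exec_cpu()/cpu_load[] names: when picking the target CPU, the
exec'ing task's own weight is discounted from the load of the CPU it
currently runs on, so the task doesn't count itself as load and
needlessly hop away:)

/*
 * Hypothetical userspace model of the exec-balance heuristic, not
 * kernel code: pick the least loaded CPU, but see this_cpu's load
 * minus the exec'ing task's own contribution ('eload').
 */
#include <stdio.h>

#define NR_CPUS 4

/* made-up per-CPU load figures; CPU 0 carries the exec'ing task */
static long cpu_load[NR_CPUS] = { 2048, 1024, 1024, 1024 };

static int pick_exec_cpu(int this_cpu, long eload)
{
    int cpu, best_cpu = this_cpu;
    long load, best_load = cpu_load[this_cpu] - eload;

    for (cpu = 0; cpu < NR_CPUS; cpu++) {
        load = cpu_load[cpu];
        if (cpu == this_cpu)
            load -= eload;    /* don't count ourselves */
        if (load < best_load) {
            best_load = load;
            best_cpu = cpu;
        }
    }
    return best_cpu;
}

int main(void)
{
    long weight = 1024;    /* the exec'ing task's own weight */

    /*
     * Without the discount the task on CPU 0 (load 2048) would
     * migrate; with it, CPU 0 looks like 1024 and the task stays.
     */
    printf("without eload: CPU %d\n", pick_exec_cpu(0, 0));
    printf("with    eload: CPU %d\n", pick_exec_cpu(0, weight));
    return 0;
}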
The approach feels somewhat backwards on a conceptual level, though.
There's nothing particularly special about exec-balancing: the load
picture is in equilibrium - it is in essence a regular rebalancing
pass, done not from the scheduler tick but at a special point in the
middle of exec(), where the old-task / new-task cross section is at
its minimum.
_fork_ balancing is what is special: there we'll get a new context,
so we have to take the new load into account. It's a bit like wakeup
balancing (just done before the new task is truly woken up).
OTOH, triggering the regular busy-balance at exec() time isn't
totally straightforward either: the 'old' task is the current task,
so it cannot simply be balanced away. We would have to trigger all
the active-migration logic - which again makes exec() balancing
special.
So maybe this patch is the best solution after all. Ken, does it do
the trick for your workload, when applied against v2.6.28-rc4?
You might also try to confirm that your testcase still works fine if
you elevate the load average by +1.0 on every CPU, by starting an
infinite CPU-eater loop on each CPU via this bash oneliner:
for ((i=0;i<2;i++)); do while :; do :; done & done
(change the '2' to '4' if you test this on a quad, not on a dual-core
box)
The desired behavior would be for your "exec hopper" testcase to not
hop between CPUs, but to stick to the same CPU most of the time.
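(In case it's useful, a minimal "exec hopper"-style test could look
roughly like the sketch below - my own guess at the shape of such a
testcase, not Ken's actual program; it assumes sched_getcpu() is
available. It re-execs itself ~100 times and prints the CPU it runs
on each time, so any hopping is directly visible:)

/*
 * Hypothetical "exec hopper" sketch: re-exec ourselves repeatedly
 * and report which CPU we run on each iteration.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int iter = argc > 1 ? atoi(argv[1]) : 0;

    printf("iter %3d on CPU %d\n", iter, sched_getcpu());
    fflush(stdout);    /* don't lose buffered output across exec() */

    if (iter < 100) {
        char next[16];

        snprintf(next, sizeof(next), "%d", iter + 1);
        execl("/proc/self/exe", argv[0], next, (char *)NULL);
        perror("execl");
        return 1;
    }
    return 0;
}

With the patch applied (and the CPU eaters running), the printed CPU
number should stay mostly constant.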
Ingo