Message-ID: <870910A1-62AF-412F-A989-1FA57B715E35@parallels.com>
Date: Fri, 8 Apr 2011 14:57:24 +0400
From: Vladimir Davydov <VDavydov@...allels.com>
To: Ken Chen <kenchen@...gle.com>
CC: "a.p.zijlstra@...llo.nl" <a.p.zijlstra@...llo.nl>,
"mingo@...e.hu" <mingo@...e.hu>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: Subject: [PATCH] sched: fixed erroneous all_pinned logic.
On Apr 8, 2011, at 4:24 AM, Ken Chen wrote:
> @@ -2302,6 +2292,9 @@ static int move_tasks(
> #endif
> } while (load_moved && max_load_move > total_load_moved);
>
> + if (total_load_moved)
> + *all_pinned = 0;
> +
> return total_load_moved > 0;
> }
>
> @@ -3300,7 +3293,7 @@ static int load_balance(
> struct sched_domain *sd, enum cpu_idle_type idle,
> int *balance)
> {
> - int ld_moved, all_pinned = 0, active_balance = 0;
> + int ld_moved, all_pinned = 1, active_balance = 0;
> struct sched_group *group;
> unsigned long imbalance;
> struct rq *busiest;
As far as I understand, this patch sets the all_pinned flag if and only if we fail to move any tasks during load balancing. However, migration can also fail because, for example, all tasks are cache-hot on their CPUs (can_migrate_task() returns 0 in that case), and then we shouldn't treat all tasks as CPU-bound, should we?
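
To make the concern concrete, here is a minimal standalone C model of the two failure modes. It is not the kernel code: can_migrate(), struct task, and the fields are hypothetical stand-ins that only mirror the shape of can_migrate_task(), which clears *all_pinned as soon as one task passes the affinity check, even if that task is later rejected as cache-hot.

#include <stdio.h>

struct task {
	int allowed_on_dst;	/* passes the cpus_allowed check */
	int cache_hot;		/* would be rejected as cache-hot */
	long load;
};

/* Hypothetical stand-in for can_migrate_task(): returns 1 if the
 * task may move; clears *all_pinned once a task clears the affinity
 * check, regardless of why it might still be rejected afterwards. */
static int can_migrate(const struct task *p, int *all_pinned)
{
	if (!p->allowed_on_dst)
		return 0;	/* pinned: leave all_pinned set */
	*all_pinned = 0;	/* movable in principle */
	if (p->cache_hot)
		return 0;	/* rejected, but NOT due to affinity */
	return 1;
}

int main(void)
{
	/* Every task passes the affinity check but is cache-hot. */
	struct task rq[] = { {1, 1, 100}, {1, 1, 200} };
	int all_pinned = 1;
	long moved = 0;

	for (unsigned i = 0; i < sizeof(rq) / sizeof(rq[0]); i++)
		if (can_migrate(&rq[i], &all_pinned))
			moved += rq[i].load;

	/* Prints moved=0 all_pinned=0. Deriving all_pinned from
	 * total_load_moved alone, as the patch does, would report
	 * all_pinned=1 here even though nothing is pinned. */
	printf("moved=%ld all_pinned=%d\n", moved, all_pinned);
	return 0;
}

In this run nothing is moved, yet all_pinned correctly stays 0, which is exactly the case the proposed "if (total_load_moved) *all_pinned = 0" would misclassify.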