Message-ID: <1302263357.9086.138.camel@twins>
Date: Fri, 08 Apr 2011 13:49:17 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Vladimir Davydov <VDavydov@...allels.com>
Cc: Ken Chen <kenchen@...gle.com>, "mingo@...e.hu" <mingo@...e.hu>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: Subject: [PATCH] sched: fixed erroneous all_pinned logic.
On Fri, 2011-04-08 at 14:57 +0400, Vladimir Davydov wrote:
>
> On Apr 8, 2011, at 4:24 AM, Ken Chen wrote:
>
> > @@ -2302,6 +2292,9 @@ static int move_tasks(
> > #endif
> > } while (load_moved && max_load_move > total_load_moved);
> >
> > + if (total_load_moved)
> > + *all_pinned = 0;
> > +
> > return total_load_moved > 0;
> > }
> >
> > @@ -3300,7 +3293,7 @@ static int load_balance(
> > struct sched_domain *sd, enum cpu_idle_type idle,
> > int *balance)
> > {
> > - int ld_moved, all_pinned = 0, active_balance = 0;
> > + int ld_moved, all_pinned = 1, active_balance = 0;
> > struct sched_group *group;
> > unsigned long imbalance;
> > struct rq *busiest;
>
> As far as I understand, this patch sets the all_pinned flag if and
> only if we fail to move any tasks during the load balance. However,
> the migration can fail because e.g. all tasks are cache hot on their
> cpus (can_migrate_task() returns 0 in this case), and in this case we
> > shouldn't treat all tasks as cpu bound, should we?
Hmm, you've got a good point there... (that'll teach me to read email in
date order).
Ken, would it work to only push the all_pinned = 1 higher, and not also
the all_pinned = 0?