Message-ID: <4F70C365.8020009@redhat.com>
Date: Mon, 26 Mar 2012 15:28:37 -0400
From: Rik van Riel <riel@...hat.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
CC: Andrea Arcangeli <aarcange@...hat.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Hillf Danton <dhillf@...il.com>, Dan Smith <danms@...ibm.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, Paul Turner <pjt@...gle.com>,
Suresh Siddha <suresh.b.siddha@...el.com>,
Mike Galbraith <efault@....de>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Lai Jiangshan <laijs@...fujitsu.com>,
Bharata B Rao <bharata.rao@...il.com>,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH 11/39] autonuma: CPU follow memory algorithm
On 03/26/2012 02:25 PM, Peter Zijlstra wrote:
> On Mon, 2012-03-26 at 19:45 +0200, Andrea Arcangeli wrote:
>> @@ -3220,6 +3214,8 @@ need_resched:
>>
>> post_schedule(rq);
>>
>> + sched_autonuma_balance();
>> +
>> sched_preempt_enable_no_resched();
>> if (need_resched())
>> goto need_resched;
>
> I already told you, this isn't ever going to happen. You do _NOT_ put a
> for_each_online_cpu() loop in the middle of schedule().

Agreed. The loop itself looks O(N), but because every CPU will be
calling it, the system-wide behaviour becomes O(N^2), which has the
potential to completely break systems with a large number of CPUs.

Finding a lower-overhead way of doing the balancing does not seem
like an insurmountable problem, though.
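
To make the scaling argument concrete, here is a toy user space model
(all names made up for illustration, none of this is kernel code): it
compares the aggregate work of every CPU doing a full online-CPU scan
from schedule() against a single deferred scan per balance interval.

	/*
	 * Toy model of the complexity argument above.  If each of N
	 * CPUs walks all online CPUs every time it passes through
	 * schedule(), the aggregate work per scheduling round is N * N.
	 * One deferred pass per balance interval stays at N.
	 */
	#include <stdio.h>

	/* Cost of one all-online-CPUs scan: one unit per CPU visited. */
	static unsigned long scan_all_cpus(unsigned long nr_cpus)
	{
		return nr_cpus;
	}

	int main(void)
	{
		unsigned long nr_cpus;

		printf("%8s %22s %22s\n", "cpus",
		       "scan from schedule()", "one scan per interval");

		for (nr_cpus = 4; nr_cpus <= 4096; nr_cpus *= 4) {
			/* every CPU runs the O(N) scan: O(N^2) total */
			unsigned long in_schedule =
				nr_cpus * scan_all_cpus(nr_cpus);
			/* a single rate-limited pass: O(N) total */
			unsigned long deferred = scan_all_cpus(nr_cpus);

			printf("%8lu %22lu %22lu\n",
			       nr_cpus, in_schedule, deferred);
		}
		return 0;
	}

At 4096 CPUs that is roughly 16.8 million units of work per scheduling
round versus 4096, which is the difference between noise and a broken
box.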
> You also do not call stop_one_cpu(migration_cpu_stop) in schedule to
> force migrate the task you just scheduled to away from this cpu. That's
> retarded.
>
> Nacked-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
--
All rights reversed