Message-ID: <510726C0.6040909@intel.com>
Date: Tue, 29 Jan 2013 09:32:48 +0800
From: Alex Shi <alex.shi@...el.com>
To: Mike Galbraith <efault@....de>
CC: Borislav Petkov <bp@...en8.de>, torvalds@...ux-foundation.org,
mingo@...hat.com, peterz@...radead.org, tglx@...utronix.de,
akpm@...ux-foundation.org, arjan@...ux.intel.com, pjt@...gle.com,
namhyung@...nel.org, vincent.guittot@...aro.org,
gregkh@...uxfoundation.org, preeti@...ux.vnet.ibm.com,
viresh.kumar@...aro.org, linux-kernel@...r.kernel.org
Subject: Re: [patch v4 0/18] sched: simplified fork, release load avg and
power awareness scheduling
>> then the above no_node-load_balance thing suffers a small-ish dip at 320
>> tasks, yeah.
>
> No no, that's not restricted to one node. It's just overloaded because
> I turned balancing off at the NODE domain level.
>
>> And AFAICR, the effect of disabling boosting will be visible in the
>> small count tasks cases anyway because if you saturate the cores with
>> tasks, the boosting algorithms tend to get the box out of boosting for
>> the simple reason that the power/perf headroom simply disappears due to
>> the SOC being busy.
>>
>>> 640 100294.8 98 38.7 570.9 2.6118
>>> 1280 115998.2 97 66.9 1132.8 1.5104
>>> 2560 125820.0 97 123.3 2256.6 0.8191
>>
>> I dunno about those. Maybe this is expected with so many tasks, or do
>> we want to optimize that case further?
>
> When using all 4 nodes properly, that's still scaling. Here, I
Without regular NODE-level balancing, only the wake-up balancing in
select_task_rq_fair is left for the aim7 run (I assume you used the
shared workfile; most of that workload is CPU-bound with only a little
exec/fork load). Since wake-up balancing only happens within the same
LLC domain, I guess that is the reason; a rough sketch of the relevant
domain walk is at the end of this mail.
> intentionally screwed up balancing to watch the low end. High end is
> expected wreckage.
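
For reference, a minimal sketch of the sched_domain walk in
select_task_rq_fair(), paraphrased from memory rather than quoted from
mainline: once SD_LOAD_BALANCE is cleared on the NODE domain, the walk
breaks out early, so the candidate domains never span more than the
shared LLC. The helper name below is only for illustration.

/*
 * Rough sketch only (paraphrased, not exact mainline code): the flag
 * check below is why clearing SD_LOAD_BALANCE at the NODE level
 * confines wake-up placement to the LLC domain.
 */
static struct sched_domain *
sketch_find_wake_domain(int cpu, int prev_cpu, int sd_flag)
{
	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
	int want_affine = !!(sd_flag & SD_BALANCE_WAKE);

	for_each_domain(cpu, tmp) {
		/* NODE domain has SD_LOAD_BALANCE cleared -> stop here */
		if (!(tmp->flags & SD_LOAD_BALANCE))
			break;

		/*
		 * Wake-affine candidate only if prev_cpu shares this
		 * domain, i.e. at most the shared LLC.
		 */
		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
			affine_sd = tmp;
			break;
		}

		if (tmp->flags & sd_flag)
			sd = tmp;
	}

	/* with NODE balancing off, both results stay within the LLC */
	return affine_sd ? affine_sd : sd;
}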