Message-ID: <510727B2.703@intel.com>
Date: Tue, 29 Jan 2013 09:36:50 +0800
From: Alex Shi <alex.shi@...el.com>
To: Borislav Petkov <bp@...en8.de>, Mike Galbraith <efault@....de>,
torvalds@...ux-foundation.org, mingo@...hat.com,
peterz@...radead.org, tglx@...utronix.de,
akpm@...ux-foundation.org, arjan@...ux.intel.com, pjt@...gle.com,
namhyung@...nel.org, vincent.guittot@...aro.org,
gregkh@...uxfoundation.org, preeti@...ux.vnet.ibm.com,
viresh.kumar@...aro.org, linux-kernel@...r.kernel.org
Subject: Re: [patch v4 0/18] sched: simplified fork, release load avg and
power awareness scheduling
> Benchmark Version Machine Run Date
> AIM Multiuser Benchmark - Suite VII "1.1" performance Jan 28 08:09:20 2013
>
> Tasks Jobs/Min JTI Real CPU Jobs/sec/task
> 1 438.8 100 13.8 3.8 7.3135
> 5 2634.8 99 11.5 7.2 8.7826
> 10 5396.3 99 11.2 11.4 8.9938
> 20 10725.7 99 11.3 24.0 8.9381
> 40 20183.2 99 12.0 38.5 8.4097
> 80 35620.9 99 13.6 71.4 7.4210
> 160 57203.5 98 16.9 137.8 5.9587
> 320 81995.8 98 23.7 271.3 4.2706
>
> then the above no_node-load_balance thing suffers a small-ish dip at 320
> tasks, yeah.
>
> And AFAICR, the effect of disabling boosting will be visible in the
> small count tasks cases anyway because if you saturate the cores with
> tasks, the boosting algorithms tend to get the box out of boosting for
> the simple reason that the power/perf headroom simply disappears due to
> the SOC being busy.
Sure. And judging from the context of this email thread, I guess this
result was taken with boosting enabled, right?
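(For reference, the Jobs/sec/task column in the AIM table above appears to
be just Jobs/Min divided by 60 and by the task count; a quick sanity check
in Python against three rows from the table:)

```python
# Jobs/sec/task = Jobs/Min / 60 / Tasks, checked against rows
# quoted from the AIM Multiuser Benchmark output above.
rows = [
    # (tasks, jobs_per_min, reported jobs/sec/task)
    (1, 438.8, 7.3135),
    (5, 2634.8, 8.7826),
    (320, 81995.8, 4.2706),
]
for tasks, jobs_per_min, reported in rows:
    derived = jobs_per_min / 60 / tasks
    # Derived value matches the reported column to rounding error.
    print(f"{tasks:4d} tasks: derived {derived:.4f}, reported {reported:.4f}")
```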
>
>> 640 100294.8 98 38.7 570.9 2.6118
>> 1280 115998.2 97 66.9 1132.8 1.5104
>> 2560 125820.0 97 123.3 2256.6 0.8191
>
> I dunno about those. Maybe this is expected with so many tasks, or do
> we want to optimize that case further?
>
--
Thanks Alex
--