Message-ID: <20130215074538.GA25845@lge.com>
Date: Fri, 15 Feb 2013 16:45:38 +0900
From: Joonsoo Kim <iamjoonsoo.kim@....com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Paul Turner <pjt@...gle.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Galbraith <efault@....de>,
Arnaldo Carvalho de Melo <acme@...radead.org>,
Clark Williams <clark@...hat.com>,
Andrew Theurer <habanero@...ibm.com>
Subject: Re: [RFC] sched: The removal of idle_balance()
Hello, Steven.
On Fri, Feb 15, 2013 at 01:13:39AM -0500, Steven Rostedt wrote:
> Performance counter stats for '/work/c/hackbench 500' (100 runs):
>
> 199820.045583 task-clock # 8.016 CPUs utilized ( +- 5.29% ) [100.00%]
> 3,594,264 context-switches # 0.018 M/sec ( +- 5.94% ) [100.00%]
> 352,240 cpu-migrations # 0.002 M/sec ( +- 3.31% ) [100.00%]
> 1,006,732 page-faults # 0.005 M/sec ( +- 0.56% )
> 293,801,912,874 cycles # 1.470 GHz ( +- 4.20% ) [100.00%]
> 261,808,125,109 stalled-cycles-frontend # 89.11% frontend cycles idle ( +- 4.38% ) [100.00%]
> <not supported> stalled-cycles-backend
> 135,521,344,089 instructions # 0.46 insns per cycle
> # 1.93 stalled cycles per insn ( +- 4.37% ) [100.00%]
> 26,198,116,586 branches # 131.109 M/sec ( +- 4.59% ) [100.00%]
> 115,326,812 branch-misses # 0.44% of all branches ( +- 4.12% )
>
> 24.929136087 seconds time elapsed ( +- 5.31% )
>
> Performance counter stats for '/work/c/hackbench 500' (100 runs):
>
> 98258.962617 task-clock # 7.998 CPUs utilized ( +- 12.12% ) [100.00%]
> 2,572,651 context-switches # 0.026 M/sec ( +- 9.35% ) [100.00%]
> 224,004 cpu-migrations # 0.002 M/sec ( +- 5.01% ) [100.00%]
> 913,813 page-faults # 0.009 M/sec ( +- 0.71% )
> 215,927,081,108 cycles # 2.198 GHz ( +- 5.48% ) [100.00%]
> 189,246,626,321 stalled-cycles-frontend # 87.64% frontend cycles idle ( +- 6.07% ) [100.00%]
> <not supported> stalled-cycles-backend
> 102,965,954,824 instructions # 0.48 insns per cycle
> # 1.84 stalled cycles per insn ( +- 5.40% ) [100.00%]
> 19,280,914,558 branches # 196.226 M/sec ( +- 5.89% ) [100.00%]
> 87,284,617 branch-misses # 0.45% of all branches ( +- 5.06% )
>
> 12.285025160 seconds time elapsed ( +- 12.14% )
IMHO, the cycles numbers look somewhat strange.
Why does one run show 1.470 GHz while the other shows 2.198 GHz?
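If I understand the perf output correctly, that GHz figure is simply cycles
divided by task-clock, so the two runs really did execute at very different
effective clock rates. A minimal sanity check (plain userspace C, constants
copied from the two quoted runs above):

    #include <stdio.h>

    int main(void)
    {
            /* Effective clock rate as perf appears to report it:
             * cycles / task-clock. Values taken from the quoted runs. */
            double cycles1 = 293801912874.0, task_clock_ms1 = 199820.045583;
            double cycles2 = 215927081108.0, task_clock_ms2 = 98258.962617;

            printf("run 1: %.3f GHz\n", cycles1 / (task_clock_ms1 * 1e6)); /* ~1.470 */
            printf("run 2: %.3f GHz\n", cycles2 / (task_clock_ms2 * 1e6)); /* ~2.198 */
            return 0;
    }

So the reported figures are internally consistent; the question is why the
effective frequency differs so much between the two runs.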
In my quick test, I get the results below.
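(For reference, the numbers were collected with something like
"perf stat -r 10 -- perf bench sched messaging -g 300"; the exact flags are
from memory, but -r 10 matches the "(10 runs)" and -g 300 the group count
shown in the output.)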
- Before Patch
Performance counter stats for 'perf bench sched messaging -g 300' (10 runs):
40847.488740 task-clock # 3.232 CPUs utilized ( +- 1.24% )
511,070 context-switches # 0.013 M/sec ( +- 7.28% )
117,882 cpu-migrations # 0.003 M/sec ( +- 5.14% )
1,360,501 page-faults # 0.033 M/sec ( +- 0.12% )
118,534,394,180 cycles # 2.902 GHz ( +- 1.23% ) [50.70%]
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
46,217,340,271 instructions # 0.39 insns per cycle ( +- 0.56% ) [76.93%]
8,592,447,548 branches # 210.354 M/sec ( +- 0.75% ) [75.50%]
273,367,481 branch-misses # 3.18% of all branches ( +- 0.26% ) [75.49%]
12.639049245 seconds time elapsed ( +- 2.29% )
- After Patch
Performance counter stats for 'perf bench sched messaging -g 300' (10 runs):
42053.008632 task-clock # 2.932 CPUs utilized ( +- 0.91% )
672,759 context-switches # 0.016 M/sec ( +- 2.76% )
83,374 cpu-migrations # 0.002 M/sec ( +- 4.46% )
1,362,900 page-faults # 0.032 M/sec ( +- 0.20% )
121,457,601,848 cycles # 2.888 GHz ( +- 0.93% ) [50.75%]
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
47,854,828,552 instructions # 0.39 insns per cycle ( +- 0.36% ) [77.09%]
8,981,553,714 branches # 213.577 M/sec ( +- 0.42% ) [75.41%]
274,229,438 branch-misses # 3.05% of all branches ( +- 0.20% ) [75.44%]
14.340330678 seconds time elapsed ( +- 1.79% )
Thanks.