Message-ID: <F7C8A4D3A9905B45A80E4C194793FA651553C00D40@PDSMSX501.ccr.corp.intel.com>
Date: Tue, 3 Nov 2009 22:42:03 +0800
From: "Shi, Alex" <alex.shi@...el.com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: kernel building regression on 2.6.32-rc5 kernel
I found that kernel building shows about a 20%~30% regression on our NHM (Nehalem) machines. My kernel build test does the following 15 times:
make mrproper; echo 3 > /proc/sys/vm/drop_caches; make defconfig; make -j$((2 * $(getconf _NPROCESSORS_ONLN)))
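Spelled out as a loop, it looks roughly like the sketch below (the perf wrapper and exact ordering are my assumption; the drop_caches write needs root):

#!/bin/sh
# Rough sketch of the build test described above.
JOBS=$((2 * $(getconf _NPROCESSORS_ONLN)))    # make -j at 2x the CPU count
for i in $(seq 1 15); do
        make mrproper
        sync                                  # flush dirty data first
        echo 3 > /proc/sys/vm/drop_caches     # drop page cache, dentries, inodes
        make defconfig
        perf stat make -j"$JOBS" > /dev/null  # collects counters like those shown below
done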
Bisecting points to this commit:
commit a6151c3a5c8e1ff5a28450bc8d6a99a2a0add0a7
Author: Jens Axboe <jens.axboe@...cle.com>
Date:   Wed Oct 7 20:02:57 2009 +0200

    cfq-iosched: apply bool value where we return 0/1
This confuses me quite a bit, since on its face the conversion should not change behavior, but after reverting the patch the performance recovered. I analyzed vmstat and "perf record" results but did not find anything unusual; only "perf stat" shows a difference.
Perf stat output of the 2.6.32-rc5 kernel:
 850121.335684  task-clock-msecs   #     1.035 CPUs
        867280  context-switches   #     0.001 M/sec
        266319  CPU-migrations     #     0.000 M/sec
      76663435  page-faults        #     0.090 M/sec
 2697372004020  cycles             #  3172.926 M/sec  (scaled from 100.00%)
 2779762582367  instructions       #     1.031 IPC    (scaled from 100.00%)
   14230675659  cache-references   #    16.740 M/sec  (scaled from 100.00%)
    3048368747  cache-misses       #     3.586 M/sec  (scaled from 100.00%)

 821.432570507  seconds time elapsed
Perf stat output of the rc5 kernel with the commit reverted:
 900054.808787  task-clock-msecs   #     1.237 CPUs
        845152  context-switches   #     0.001 M/sec
        252689  CPU-migrations     #     0.000 M/sec
      77495232  page-faults        #     0.086 M/sec
 2470286471361  cycles             #  2744.596 M/sec
 2684134677043  instructions       #     1.087 IPC
   14685367142  cache-references   #    16.316 M/sec
    2869381827  cache-misses       #     3.188 M/sec

 727.415980216  seconds time elapsed
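From the elapsed times, 821.43 / 727.42 ≈ 1.13, i.e. this run takes about 13% longer wall time with the commit in place, and the average CPU utilization drops from 1.237 to 1.035 CPUs, so the build seems to spend more time idle (presumably waiting on I/O).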
Best regards,
Alex