Message-ID: <87d1xojkpp.fsf@yhuang-dev.intel.com>
Date: Sat, 12 Sep 2015 13:00:18 +0800
From: kernel test robot <ying.huang@...el.com>
To: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [lkp] [fs/notify] 7c49b86164: +19.5% will-it-scale.per_process_ops
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 7c49b8616460ebb12ee56d80d1abfbc20b6f3cbb ("fs/notify: optimize inotify/fsnotify code for unwatched files")
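The profile changes below show __srcu_read_lock()/__srcu_read_unlock()
disappearing from the vfs_write() path, which matches the commit title:
fsnotify() used to take an SRCU read lock just to discover there was
nothing to notify. A rough sketch of the idea (paraphrased from the
commit subject and the profile symbols, not the literal diff; the field
names are assumptions based on the fs/notify code of that era):

    /*
     * fsnotify() takes an SRCU read lock to walk the inode and vfsmount
     * mark lists, and srcu_read_lock() is not free when taken on every
     * read() and write().  If neither object has any marks attached,
     * there is nothing to walk, so the lock can be skipped entirely
     * for unwatched files.
     */
    if (hlist_empty(&to_tell->i_fsnotify_marks) &&
        (!mnt || hlist_empty(&mnt->mnt_fsnotify_marks)))
            return 0;   /* no watchers anywhere: skip the SRCU lock */

    idx = srcu_read_lock(&fsnotify_mark_srcu);
    /* ... walk the mark lists and deliver events as before ... */
    srcu_read_unlock(&fsnotify_mark_srcu, idx);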
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
brickland1/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/eventfd1
commit:
031e29b5877f31676739dc2f847d04c2c0732034
7c49b8616460ebb12ee56d80d1abfbc20b6f3cbb
031e29b5877f3167           7c49b8616460ebb12ee56d80d1
----------------           --------------------------
         %stddev      %change          %stddev
             \            |                 \
1198501 ± 0% +19.5% 1432797 ± 0% will-it-scale.per_process_ops
993833 ± 0% +15.6% 1149039 ± 0% will-it-scale.per_thread_ops
0.30 ± 0% -6.0% 0.29 ± 1% will-it-scale.scalability
6563 ± 0% -3.0% 6365 ± 0% will-it-scale.time.system_time
1370 ± 0% +14.5% 1569 ± 0% will-it-scale.time.user_time
224936 ± 2% -23.5% 172104 ± 4% cpuidle.C1-IVT-4S.usage
1036 ± 15% +27.4% 1319 ± 10% numa-meminfo.node3.PageTables
259.25 ± 15% +27.4% 330.25 ± 10% numa-vmstat.node3.nr_page_table_pages
1370 ± 0% +14.5% 1569 ± 0% time.user_time
4081 ± 1% -10.6% 3648 ± 2% vmstat.system.cs
430497 ± 2% -14.6% 367840 ± 3% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
6803 ± 34% +174.1% 18648 ± 61% latency_stats.sum.pipe_wait.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
759.27 ± 0% +15.2% 874.88 ± 0% turbostat.CorWatt
773.06 ± 0% +15.0% 888.69 ± 0% turbostat.PkgWatt
0.84 ± 1% +47.0% 1.24 ± 2% perf-profile.cpu-cycles.___might_sleep.__might_sleep.__might_fault.eventfd_read.__vfs_read
1.64 ± 2% +29.4% 2.11 ± 5% perf-profile.cpu-cycles.__fget_light.sys_read.entry_SYSCALL_64_fastpath
1.65 ± 4% +30.2% 2.15 ± 7% perf-profile.cpu-cycles.__fget_light.sys_write.entry_SYSCALL_64_fastpath
1.76 ± 1% +37.6% 2.42 ± 0% perf-profile.cpu-cycles.__might_fault.eventfd_read.__vfs_read.vfs_read.sys_read
1.60 ± 3% +24.0% 1.99 ± 3% perf-profile.cpu-cycles.__might_fault.eventfd_write.__vfs_write.vfs_write.sys_write
1.43 ± 1% +49.7% 2.14 ± 0% perf-profile.cpu-cycles.__might_sleep.__might_fault.eventfd_read.__vfs_read.vfs_read
1.06 ± 2% +24.5% 1.32 ± 4% perf-profile.cpu-cycles.__might_sleep.__might_fault.eventfd_write.__vfs_write.vfs_write
0.86 ± 4% +40.0% 1.21 ± 1% perf-profile.cpu-cycles.__put_user_8.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
6.93 ± 1% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.__srcu_read_lock.fsnotify.vfs_write.sys_write.entry_SYSCALL_64_fastpath
5.96 ± 1% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.__srcu_read_unlock.fsnotify.vfs_write.sys_write.entry_SYSCALL_64_fastpath
12.14 ± 1% +21.3% 14.72 ± 0% perf-profile.cpu-cycles.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
12.68 ± 0% +32.9% 16.85 ± 1% perf-profile.cpu-cycles.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
3.01 ± 2% +19.7% 3.60 ± 1% perf-profile.cpu-cycles._raw_spin_lock_irq.eventfd_ctx_read.eventfd_read.__vfs_read.vfs_read
2.65 ± 0% +55.2% 4.11 ± 2% perf-profile.cpu-cycles._raw_spin_lock_irq.eventfd_write.__vfs_write.vfs_write.sys_write
5.19 ± 22% +240.5% 17.66 ± 10% perf-profile.cpu-cycles.apic_timer_interrupt
1.73 ± 1% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.apic_timer_interrupt.fsnotify.vfs_write.sys_write.entry_SYSCALL_64_fastpath
2.59 ± 1% +32.3% 3.43 ± 2% perf-profile.cpu-cycles.copy_user_enhanced_fast_string.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
6.75 ± 1% +21.3% 8.19 ± 3% perf-profile.cpu-cycles.entry_SYSCALL_64
7.04 ± 1% +20.9% 8.52 ± 2% perf-profile.cpu-cycles.entry_SYSCALL_64_after_swapgs
5.32 ± 2% +20.0% 6.38 ± 1% perf-profile.cpu-cycles.eventfd_ctx_read.eventfd_read.__vfs_read.vfs_read.sys_read
9.34 ± 0% +21.4% 11.34 ± 0% perf-profile.cpu-cycles.eventfd_read.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
6.92 ± 1% +31.5% 9.10 ± 2% perf-profile.cpu-cycles.eventfd_write.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.99 ± 2% +50.9% 1.49 ± 4% perf-profile.cpu-cycles.file_has_perm.selinux_file_permission.security_file_permission.rw_verify_area.vfs_read
0.99 ± 2% +17.9% 1.17 ± 3% perf-profile.cpu-cycles.file_has_perm.selinux_file_permission.security_file_permission.rw_verify_area.vfs_write
1.76 ± 2% -37.5% 1.10 ± 2% perf-profile.cpu-cycles.fsnotify.security_file_permission.rw_verify_area.vfs_read.sys_read
1.29 ± 2% -31.4% 0.89 ± 2% perf-profile.cpu-cycles.fsnotify.vfs_read.sys_read.entry_SYSCALL_64_fastpath
17.38 ± 0% -94.7% 0.91 ± 3% perf-profile.cpu-cycles.fsnotify.vfs_write.sys_write.entry_SYSCALL_64_fastpath
5.53 ± 25% +213.2% 17.30 ± 11% perf-profile.cpu-cycles.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.64 ± 4% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.fsnotify
5.54 ± 23% +212.5% 17.31 ± 11% perf-profile.cpu-cycles.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.64 ± 4% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.fsnotify.vfs_write
9.47 ± 0% +11.8% 10.59 ± 0% perf-profile.cpu-cycles.rw_verify_area.vfs_read.sys_read.entry_SYSCALL_64_fastpath
5.11 ± 0% +20.9% 6.18 ± 1% perf-profile.cpu-cycles.rw_verify_area.vfs_write.sys_write.entry_SYSCALL_64_fastpath
3.94 ± 1% +18.5% 4.67 ± 0% perf-profile.cpu-cycles.security_file_permission.rw_verify_area.vfs_write.sys_write.entry_SYSCALL_64_fastpath
2.66 ± 1% +36.7% 3.63 ± 1% perf-profile.cpu-cycles.selinux_file_permission.security_file_permission.rw_verify_area.vfs_read.sys_read
2.69 ± 1% +20.2% 3.23 ± 1% perf-profile.cpu-cycles.selinux_file_permission.security_file_permission.rw_verify_area.vfs_write.sys_write
5.43 ± 26% +219.9% 17.37 ± 11% perf-profile.cpu-cycles.smp_apic_timer_interrupt.apic_timer_interrupt
1.66 ± 5% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.smp_apic_timer_interrupt.apic_timer_interrupt.fsnotify.vfs_write.sys_write
31.47 ± 0% +17.1% 36.85 ± 0% perf-profile.cpu-cycles.sys_read.entry_SYSCALL_64_fastpath
45.56 ± 0% -22.9% 35.15 ± 0% perf-profile.cpu-cycles.sys_write.entry_SYSCALL_64_fastpath
26.70 ± 0% +15.1% 30.72 ± 0% perf-profile.cpu-cycles.vfs_read.sys_read.entry_SYSCALL_64_fastpath
40.55 ± 0% -29.7% 28.49 ± 1% perf-profile.cpu-cycles.vfs_write.sys_write.entry_SYSCALL_64_fastpath
80422 ± 6% -12.1% 70685 ± 10% sched_debug.cfs_rq[0]:/.exec_clock
929.25 ± 1% +30.2% 1210 ± 14% sched_debug.cfs_rq[0]:/.tg_load_avg
919.25 ± 1% +29.1% 1187 ± 13% sched_debug.cfs_rq[10]:/.tg_load_avg
24202 ± 11% +17.9% 28527 ± 5% sched_debug.cfs_rq[118]:/.exec_clock
916.50 ± 1% +29.2% 1183 ± 13% sched_debug.cfs_rq[11]:/.tg_load_avg
916.75 ± 1% +29.0% 1183 ± 13% sched_debug.cfs_rq[12]:/.tg_load_avg
917.25 ± 1% +28.7% 1180 ± 14% sched_debug.cfs_rq[13]:/.tg_load_avg
917.25 ± 0% +28.7% 1180 ± 14% sched_debug.cfs_rq[14]:/.tg_load_avg
915.25 ± 1% +29.0% 1180 ± 14% sched_debug.cfs_rq[15]:/.tg_load_avg
916.00 ± 1% +28.9% 1181 ± 14% sched_debug.cfs_rq[16]:/.tg_load_avg
915.00 ± 1% +29.1% 1181 ± 14% sched_debug.cfs_rq[17]:/.tg_load_avg
914.75 ± 1% +29.0% 1180 ± 14% sched_debug.cfs_rq[18]:/.tg_load_avg
914.25 ± 1% +29.1% 1180 ± 14% sched_debug.cfs_rq[19]:/.tg_load_avg
929.25 ± 1% +30.3% 1210 ± 14% sched_debug.cfs_rq[1]:/.tg_load_avg
914.00 ± 1% +29.1% 1180 ± 13% sched_debug.cfs_rq[20]:/.tg_load_avg
915.25 ± 1% +29.0% 1180 ± 13% sched_debug.cfs_rq[21]:/.tg_load_avg
916.25 ± 1% +28.8% 1180 ± 13% sched_debug.cfs_rq[22]:/.tg_load_avg
914.00 ± 1% +29.0% 1179 ± 14% sched_debug.cfs_rq[23]:/.tg_load_avg
912.75 ± 1% +29.1% 1178 ± 13% sched_debug.cfs_rq[24]:/.tg_load_avg
996.75 ± 14% +16.2% 1158 ± 12% sched_debug.cfs_rq[25]:/.tg_load_avg
928.75 ± 1% +30.3% 1210 ± 14% sched_debug.cfs_rq[2]:/.tg_load_avg
26957 ± 8% +16.0% 31280 ± 10% sched_debug.cfs_rq[33]:/.exec_clock
1.75 ± 47% +314.3% 7.25 ± 85% sched_debug.cfs_rq[39]:/.load_avg
1.75 ± 47% +314.3% 7.25 ± 85% sched_debug.cfs_rq[39]:/.tg_load_avg_contrib
925.00 ± 1% +30.8% 1210 ± 14% sched_debug.cfs_rq[3]:/.tg_load_avg
5.00 ± 86% -100.0% 0.00 ± -1% sched_debug.cfs_rq[46]:/.nr_spread_over
924.50 ± 1% +30.4% 1206 ± 14% sched_debug.cfs_rq[4]:/.tg_load_avg
170.75 ± 1% +36.2% 232.50 ± 28% sched_debug.cfs_rq[59]:/.util_avg
921.75 ± 1% +28.9% 1188 ± 13% sched_debug.cfs_rq[5]:/.tg_load_avg
88047 ± 6% +11.4% 98122 ± 7% sched_debug.cfs_rq[60]:/.exec_clock
768.25 ± 12% -9.2% 697.50 ± 6% sched_debug.cfs_rq[63]:/.util_avg
758.25 ± 10% -11.2% 673.50 ± 0% sched_debug.cfs_rq[66]:/.util_avg
922.00 ± 1% +28.9% 1188 ± 13% sched_debug.cfs_rq[6]:/.tg_load_avg
922.25 ± 1% +28.9% 1188 ± 13% sched_debug.cfs_rq[7]:/.tg_load_avg
922.00 ± 1% +28.9% 1188 ± 13% sched_debug.cfs_rq[8]:/.tg_load_avg
921.25 ± 1% +29.0% 1188 ± 13% sched_debug.cfs_rq[9]:/.tg_load_avg
87563 ± 5% -11.3% 77697 ± 9% sched_debug.cpu#0.nr_load_updates
12103 ± 13% -17.6% 9976 ± 7% sched_debug.cpu#1.sched_count
5785 ± 14% -20.7% 4586 ± 7% sched_debug.cpu#1.ttwu_count
894319 ± 1% +6.3% 951023 ± 5% sched_debug.cpu#10.avg_idle
571.00 ± 35% +143.3% 1389 ± 74% sched_debug.cpu#10.ttwu_count
821.25 ± 6% +157.4% 2114 ± 46% sched_debug.cpu#100.nr_switches
-0.25 ±-435% -1700.0% 4.00 ± 46% sched_debug.cpu#100.nr_uninterruptible
824.50 ± 6% +156.6% 2115 ± 46% sched_debug.cpu#100.sched_count
346.75 ± 6% +158.5% 896.50 ± 48% sched_debug.cpu#100.sched_goidle
380.50 ± 15% +174.2% 1043 ± 56% sched_debug.cpu#100.ttwu_count
216.25 ± 4% +237.8% 730.50 ± 59% sched_debug.cpu#100.ttwu_local
1262 ± 46% +109.0% 2637 ± 69% sched_debug.cpu#103.sched_count
-1.50 ±-152% -250.0% 2.25 ± 36% sched_debug.cpu#104.nr_uninterruptible
747.25 ± 34% +494.1% 4439 ± 53% sched_debug.cpu#106.ttwu_count
1948 ± 33% +204.4% 5931 ± 70% sched_debug.cpu#109.nr_switches
1951 ± 33% +204.1% 5934 ± 70% sched_debug.cpu#109.sched_count
1845 ± 19% -29.8% 1295 ± 20% sched_debug.cpu#11.nr_switches
1861 ± 19% -23.7% 1420 ± 16% sched_debug.cpu#11.sched_count
796.50 ± 23% -34.3% 523.50 ± 19% sched_debug.cpu#11.sched_goidle
539.50 ± 32% -50.5% 267.25 ± 38% sched_debug.cpu#11.ttwu_local
1714 ± 97% -81.1% 323.75 ± 43% sched_debug.cpu#110.ttwu_local
534.25 ± 7% +136.5% 1263 ± 56% sched_debug.cpu#111.ttwu_count
1181 ± 62% -61.6% 453.50 ± 16% sched_debug.cpu#12.ttwu_count
744.75 ± 86% -70.9% 216.75 ± 28% sched_debug.cpu#12.ttwu_local
2.50 ± 60% -260.0% -4.00 ±-88% sched_debug.cpu#14.nr_uninterruptible
17797 ± 83% -89.6% 1849 ± 7% sched_debug.cpu#15.nr_switches
8746 ± 84% -91.0% 788.00 ± 8% sched_debug.cpu#15.sched_goidle
1349 ±116% -77.4% 305.00 ± 45% sched_debug.cpu#19.ttwu_local
1473 ± 39% +61.5% 2379 ± 10% sched_debug.cpu#2.ttwu_count
2393 ± 42% -34.6% 1565 ± 11% sched_debug.cpu#27.nr_switches
2457 ± 39% -34.8% 1603 ± 10% sched_debug.cpu#27.sched_count
1011 ± 49% -42.3% 583.50 ± 23% sched_debug.cpu#27.ttwu_count
552.75 ± 80% -62.6% 207.00 ± 13% sched_debug.cpu#27.ttwu_local
276.75 ± 33% +81.9% 503.50 ± 53% sched_debug.cpu#29.ttwu_local
2439 ± 26% +40.0% 3416 ± 21% sched_debug.cpu#3.nr_switches
981.50 ± 5% +50.5% 1476 ± 18% sched_debug.cpu#3.ttwu_count
535.50 ± 22% +56.7% 839.00 ± 24% sched_debug.cpu#3.ttwu_local
1107 ± 81% +125.6% 2498 ± 58% sched_debug.cpu#30.ttwu_local
4480 ± 73% -66.8% 1487 ± 15% sched_debug.cpu#35.ttwu_count
-0.25 ±-714% +1500.0% -4.00 ±-46% sched_debug.cpu#36.nr_uninterruptible
8152 ± 59% -54.6% 3699 ± 15% sched_debug.cpu#37.sched_count
1235 ± 55% -50.9% 606.50 ± 10% sched_debug.cpu#37.ttwu_local
3522 ± 15% +219.3% 11245 ± 52% sched_debug.cpu#39.nr_switches
3573 ± 13% +215.2% 11261 ± 52% sched_debug.cpu#39.sched_count
1416 ± 11% +286.4% 5471 ± 53% sched_debug.cpu#39.sched_goidle
0.25 ±519% -2600.0% -6.25 ±-34% sched_debug.cpu#5.nr_uninterruptible
2469 ± 99% -77.9% 544.75 ± 29% sched_debug.cpu#50.ttwu_local
3.25 ± 82% -76.9% 0.75 ±197% sched_debug.cpu#55.nr_uninterruptible
1323 ± 36% -43.1% 752.75 ± 36% sched_debug.cpu#55.ttwu_local
1770 ± 16% +168.3% 4748 ± 27% sched_debug.cpu#59.sched_goidle
2345 ± 18% -27.8% 1693 ± 17% sched_debug.cpu#6.nr_switches
2401 ± 16% -26.2% 1771 ± 14% sched_debug.cpu#6.sched_count
102141 ± 3% +10.4% 112728 ± 6% sched_debug.cpu#60.nr_load_updates
821.75 ± 48% +139.5% 1968 ± 20% sched_debug.cpu#62.nr_switches
830.00 ± 48% +189.0% 2399 ± 37% sched_debug.cpu#62.sched_count
296.75 ± 50% +160.1% 771.75 ± 18% sched_debug.cpu#62.sched_goidle
499.00 ± 18% +128.5% 1140 ± 50% sched_debug.cpu#65.nr_switches
504.75 ± 17% +126.9% 1145 ± 50% sched_debug.cpu#65.sched_count
172.75 ± 9% +194.8% 509.25 ± 53% sched_debug.cpu#65.sched_goidle
220.50 ± 30% +125.4% 497.00 ± 55% sched_debug.cpu#65.ttwu_count
100.00 ± 17% +184.0% 284.00 ± 62% sched_debug.cpu#65.ttwu_local
1.75 ±109% -142.9% -0.75 ±-110% sched_debug.cpu#67.nr_uninterruptible
100.75 ± 10% +36.7% 137.75 ± 23% sched_debug.cpu#68.ttwu_local
629.00 ± 49% +86.0% 1170 ± 43% sched_debug.cpu#7.ttwu_count
478.25 ± 11% +39.7% 668.00 ± 11% sched_debug.cpu#71.nr_switches
484.75 ± 11% +38.9% 673.25 ± 11% sched_debug.cpu#71.sched_count
185.25 ± 11% +41.8% 262.75 ± 15% sched_debug.cpu#71.sched_goidle
188.00 ± 14% +33.0% 250.00 ± 13% sched_debug.cpu#71.ttwu_count
437.25 ± 13% +586.4% 3001 ± 50% sched_debug.cpu#76.ttwu_count
198.50 ± 4% -7.8% 183.00 ± 3% sched_debug.cpu#79.ttwu_local
1252 ± 41% +119.3% 2746 ± 35% sched_debug.cpu#8.nr_switches
1350 ± 37% +109.6% 2830 ± 36% sched_debug.cpu#8.sched_count
510.50 ± 51% +140.7% 1228 ± 39% sched_debug.cpu#8.sched_goidle
199.75 ± 51% +125.9% 451.25 ± 53% sched_debug.cpu#8.ttwu_local
403.75 ± 20% -24.4% 305.25 ± 3% sched_debug.cpu#80.sched_goidle
2.25 ± 19% -77.8% 0.50 ±173% sched_debug.cpu#84.nr_uninterruptible
483.00 ± 91% -64.5% 171.25 ± 5% sched_debug.cpu#87.ttwu_local
393.50 ± 16% -21.3% 309.75 ± 9% sched_debug.cpu#89.sched_goidle
-0.75 ±-57% -266.7% 1.25 ±118% sched_debug.cpu#94.nr_uninterruptible
0.75 ±197% +533.3% 4.75 ± 31% sched_debug.cpu#95.nr_uninterruptible
620.50 ± 29% -32.3% 420.00 ± 7% sched_debug.cpu#95.sched_goidle
890.00 ± 7% +179.1% 2483 ± 77% sched_debug.cpu#98.nr_switches
893.25 ± 7% +178.3% 2486 ± 77% sched_debug.cpu#98.sched_count
373.25 ± 9% +206.2% 1142 ± 85% sched_debug.cpu#98.sched_goidle
228.00 ± 4% +70.1% 387.75 ± 50% sched_debug.cpu#98.ttwu_local
371.25 ± 8% +293.3% 1460 ±103% sched_debug.cpu#99.sched_goidle
brickland1: Brickland Ivy Bridge-EX
Memory: 128G
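For context on why this particular microbenchmark reacts so strongly:
the eventfd1 testcase hammers write() and read() on an eventfd that
nothing watches, so fsnotify() runs on every iteration and its fixed
per-call cost is magnified. A minimal standalone loop in the same
spirit (an approximation based on the testcase name, not the
will-it-scale source):

    /* Build with: gcc -O2 -o eventfd-loop eventfd-loop.c */
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/eventfd.h>

    int main(void)
    {
            int efd = eventfd(0, 0);
            uint64_t val = 1, out;

            if (efd < 0) {
                    perror("eventfd");
                    return 1;
            }
            for (long i = 0; i < 10 * 1000 * 1000; i++) {
                    /* Both calls go through vfs_write()/vfs_read(),
                     * and thus fsnotify(), even though no inotify
                     * watch exists on this file. */
                    if (write(efd, &val, sizeof(val)) != sizeof(val) ||
                        read(efd, &out, sizeof(out)) != sizeof(out)) {
                            perror("write/read");
                            return 1;
                    }
            }
            close(efd);
            return 0;
    }

Timing a loop like this before and after the commit should show roughly
the same per-iteration improvement as will-it-scale.per_process_ops
above.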
turbostat.PkgWatt
1000 ++-------------------------------------------------------------------+
900 O+ O O O O O O O O O |
| O O O O O O O O O O O |
800 ++ *..*...*..*..*..*..*..*..*...*..*..*..*..*..*..*..*...*..*..*..*
700 ++ : |
| : |
600 ++ : |
500 ++ : |
400 ++ : |
| : |
300 ++ : |
200 ++ : |
| : |
100 ++ : |
0 *+-*-----------------------------------------------------------------+
turbostat.CorWatt
900 O+----O---O--O--O--O-----O---O--O--O--O--O---O--O--O--O---------O-----+
| O O O O |
800 ++ *...*..*..*..*..*..*...*..*..*..*..*...*..*..*..*..*..*...*..*..*
700 ++ : |
| : |
600 ++ : |
500 ++ : |
| : |
400 ++ : |
300 ++ : |
| : |
200 ++ : |
100 ++ : |
| : |
0 *+-*------------------------------------------------------------------+
will-it-scale.per_process_ops
1.6e+06 ++----------------------------------------------------------------+
O O O O O O O O O O O O O |
1.4e+06 ++ O O O O O O O O |
| .*..|
1.2e+06 ++ *..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*. *
1e+06 ++ : |
| : |
800000 ++ : |
| : |
600000 ++ : |
400000 ++ : |
| : |
200000 ++ : |
| : |
0 *+-*--------------------------------------------------------------+
will-it-scale.per_thread_ops
1.2e+06 O+-O--O--O--------------------------------------------------------+
| O O O O O O O O O O O O O O O O O |
1e+06 ++ .*.. .*.. .*..*..*..*..*
| *..*..*..*..*..*..*..*..*..*..*. *. *..*. |
| : |
800000 ++ : |
| : |
600000 ++ : |
| : |
400000 ++ : |
| : |
| : |
200000 ++ : |
| : |
0 *+-*--------------------------------------------------------------+
will-it-scale.time.user_time
1600 O+-O--O--O---O--O--O--O--O--O--O---O--O--O-----O--O--O--O---O--O-----+
| O |
1400 ++ *..*...*..*..*..*..*..*..*...*..*..*..*..*..*.. .*...*..*..*..*
1200 ++ : *. |
| : |
1000 ++ : |
| : |
800 ++ : |
| : |
600 ++ : |
400 ++ : |
| : |
200 ++ : |
| : |
0 *+-*-----------------------------------------------------------------+
will-it-scale.time.system_time
7000 ++-------------------------------------------------------------------+
O O O..O...O..O..O..O..O..O..O...O..O..O..O..O..O..O..O...O..O..*..*
6000 ++ : |
| : |
5000 ++ : |
| : |
4000 ++ : |
| : |
3000 ++ : |
| : |
2000 ++ : |
| : |
1000 ++ : |
| : |
0 *+-*-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
View attachment "job.yaml" of type "text/plain" (3212 bytes)
View attachment "reproduce" of type "text/plain" (8942 bytes)