Message-ID: <87vbaivy7g.fsf@yhuang-dev.intel.com>
Date: Thu, 08 Oct 2015 11:25:07 +0800
From: kernel test robot <ying.huang@...ux.intel.com>
TO: Peng Tao <tao.peng@...marydata.com>
CC: Trond Myklebust <trond.myklebust@...marydata.com>
Subject: [lkp] [nfs] 048883e0b9: No primary result change, -70.4% fsmark.time.involuntary_context_switches
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 048883e0b934d9a5103d40e209cb14b7f33d2933 ("nfs: fix pg_test page count calculation")
=========================================================================================
tbox_group:              lkp-ws02
testcase:                fsmark
rootfs:                  debian-x86_64-2015-02-07.cgz
kconfig:                 x86_64-rhel
compiler:                gcc-4.9
iterations:              1x
nr_threads:              32t
disk:                    1HDD
fs:                      xfs
fs2:                     nfsv4
filesize:                16MB
test_size:               60G
sync_method:             fsyncBeforeClose
nr_directories:          16d
nr_files_per_directory:  256fpd
commit:
a41cbe86df3afbc82311a1640e20858c0cd7e065
048883e0b934d9a5103d40e209cb14b7f33d2933
     a41cbe86df3afbc8 (base)        048883e0b934d9a5103d40e209 (tested)
     -----------------------        -----------------------------------
     base mean ± %stddev   %change   tested mean ± %stddev   metric
261986 ± 0% -70.4% 77543 ± 0% fsmark.time.involuntary_context_switches
272406 ± 0% -17.9% 223687 ± 0% fsmark.time.voluntary_context_switches
5443 ± 0% -38.6% 3342 ± 0% vmstat.system.cs
475248 ± 0% -50.9% 233285 ± 0% softirqs.NET_RX
157367 ± 1% -9.0% 143212 ± 0% softirqs.SCHED
261986 ± 0% -70.4% 77543 ± 0% time.involuntary_context_switches
272406 ± 0% -17.9% 223687 ± 0% time.voluntary_context_switches
248624 ± 1% +294.3% 980340 ± 0% meminfo.Active
223111 ± 2% +328.0% 954877 ± 0% meminfo.Active(file)
65657 ± 0% -13.1% 57050 ± 0% meminfo.SUnreclaim
1.34 ± 0% -5.2% 1.27 ± 0% turbostat.%Busy
5.19 ± 1% -28.5% 3.71 ± 3% turbostat.CPU%c1
12.41 ± 1% -52.3% 5.91 ± 4% turbostat.CPU%c3
14.86 ± 1% -23.3% 11.39 ± 3% turbostat.Pkg%pc3
16.35 ± 1% +41.8% 23.19 ± 1% turbostat.Pkg%pc6
2.675e+08 ± 4% -12.9% 2.329e+08 ± 7% cpuidle.C1-NHM.time
165684 ± 4% +12.4% 186205 ± 2% cpuidle.C1-NHM.usage
1.446e+08 ± 7% -75.9% 34785128 ± 12% cpuidle.C1E-NHM.time
79076 ± 1% -87.7% 9744 ± 3% cpuidle.C1E-NHM.usage
1.618e+09 ± 1% -55.5% 7.193e+08 ± 3% cpuidle.C3-NHM.time
510548 ± 0% -73.4% 135726 ± 1% cpuidle.C3-NHM.usage
1532641 ± 0% -9.8% 1382714 ± 1% cpuidle.C6-NHM.usage
119890 ± 2% +322.8% 506915 ± 5% numa-meminfo.node0.Active
107993 ± 2% +356.4% 492854 ± 5% numa-meminfo.node0.Active(file)
34025 ± 3% -9.9% 30670 ± 1% numa-meminfo.node0.SUnreclaim
128802 ± 4% +267.7% 473544 ± 6% numa-meminfo.node1.Active
115176 ± 4% +301.2% 462134 ± 6% numa-meminfo.node1.Active(file)
1217 ± 4% -20.5% 967.25 ± 22% numa-meminfo.node1.Dirty
9663 ± 24% -29.4% 6824 ± 35% numa-meminfo.node1.Mapped
31637 ± 3% -16.6% 26381 ± 1% numa-meminfo.node1.SUnreclaim
11631847 ± 2% +19.8% 13937484 ± 4% numa-numastat.node0.local_node
3950957 ± 9% +58.3% 6253584 ± 10% numa-numastat.node0.numa_foreign
11631855 ± 2% +19.8% 13937495 ± 4% numa-numastat.node0.numa_hit
4660337 ± 3% -27.1% 3398872 ± 7% numa-numastat.node0.numa_miss
4660345 ± 3% -27.1% 3398883 ± 7% numa-numastat.node0.other_node
13541675 ± 3% -24.6% 10208933 ± 9% numa-numastat.node1.local_node
4660333 ± 3% -27.1% 3398872 ± 7% numa-numastat.node1.numa_foreign
13541683 ± 3% -24.6% 10208939 ± 9% numa-numastat.node1.numa_hit
3950957 ± 9% +58.3% 6253604 ± 10% numa-numastat.node1.numa_miss
3950964 ± 9% +58.3% 6253611 ± 10% numa-numastat.node1.other_node
55777 ± 2% +328.0% 238719 ± 0% proc-vmstat.nr_active_file
16414 ± 0% -13.1% 14262 ± 0% proc-vmstat.nr_slab_unreclaimable
8605572 ± 5% +12.1% 9648367 ± 6% proc-vmstat.numa_foreign
8605590 ± 5% +12.1% 9648366 ± 6% proc-vmstat.numa_miss
8605606 ± 5% +12.1% 9648384 ± 6% proc-vmstat.numa_other
1080 ± 8% -14.5% 924.25 ± 5% proc-vmstat.numa_pages_migrated
68361 ± 5% +620.8% 492764 ± 2% proc-vmstat.pgactivate
1080 ± 8% -14.5% 924.25 ± 5% proc-vmstat.pgmigrate_success
4917430 ± 1% +8.5% 5336634 ± 2% proc-vmstat.pgscan_kswapd_dma32
2245024 ± 0% -70.9% 653056 ± 0% proc-vmstat.slabs_scanned
26997 ± 2% +356.4% 123206 ± 5% numa-vmstat.node0.nr_active_file
8505 ± 3% -9.9% 7667 ± 1% numa-vmstat.node0.nr_slab_unreclaimable
1556010 ± 11% +72.7% 2687972 ± 9% numa-vmstat.node0.numa_foreign
5466532 ± 2% +17.2% 6404739 ± 5% numa-vmstat.node0.numa_hit
5466271 ± 2% +17.2% 6404584 ± 5% numa-vmstat.node0.numa_local
2073926 ± 8% -27.8% 1497829 ± 15% numa-vmstat.node0.numa_miss
2074187 ± 8% -27.8% 1497984 ± 15% numa-vmstat.node0.numa_other
28794 ± 4% +301.2% 115526 ± 6% numa-vmstat.node1.nr_active_file
300.50 ± 4% -27.2% 218.75 ± 17% numa-vmstat.node1.nr_dirty
2415 ± 24% -29.4% 1705 ± 35% numa-vmstat.node1.nr_mapped
7909 ± 3% -16.6% 6595 ± 1% numa-vmstat.node1.nr_slab_unreclaimable
2073932 ± 8% -27.8% 1497832 ± 15% numa-vmstat.node1.numa_foreign
6395016 ± 3% -23.4% 4896548 ± 6% numa-vmstat.node1.numa_hit
6330567 ± 3% -23.7% 4832140 ± 6% numa-vmstat.node1.numa_local
1556010 ± 11% +72.7% 2687974 ± 9% numa-vmstat.node1.numa_miss
1620459 ± 10% +69.9% 2752382 ± 8% numa-vmstat.node1.numa_other
4543 ± 2% -78.3% 986.25 ± 1% slabinfo.RAW.active_objs
129.25 ± 2% -79.3% 26.75 ± 1% slabinfo.RAW.active_slabs
4672 ± 2% -78.9% 986.25 ± 1% slabinfo.RAW.num_objs
129.25 ± 2% -79.3% 26.75 ± 1% slabinfo.RAW.num_slabs
55723 ± 0% -40.0% 33420 ± 0% slabinfo.kmalloc-128.active_objs
1768 ± 0% -39.6% 1067 ± 1% slabinfo.kmalloc-128.active_slabs
56593 ± 0% -39.6% 34163 ± 1% slabinfo.kmalloc-128.num_objs
1768 ± 0% -39.6% 1067 ± 1% slabinfo.kmalloc-128.num_slabs
12134 ± 1% -7.9% 11175 ± 3% slabinfo.kmalloc-192.active_objs
12367 ± 1% -9.0% 11257 ± 2% slabinfo.kmalloc-192.num_objs
14574 ± 1% -32.2% 9875 ± 3% slabinfo.kmalloc-32.active_objs
14574 ± 1% -32.2% 9875 ± 3% slabinfo.kmalloc-32.num_objs
35364 ± 0% -62.7% 13198 ± 2% slabinfo.kmalloc-96.active_objs
854.00 ± 0% -62.5% 320.25 ± 1% slabinfo.kmalloc-96.active_slabs
35880 ± 0% -62.5% 13469 ± 1% slabinfo.kmalloc-96.num_objs
854.00 ± 0% -62.5% 320.25 ± 1% slabinfo.kmalloc-96.num_slabs
8.25 ± 69% +524.2% 51.50 ± 42% sched_debug.cfs_rq[0]:/.load
9290 ± 2% -28.3% 6664 ± 3% sched_debug.cfs_rq[10]:/.exec_clock
267277 ± 2% -9.4% 242184 ± 4% sched_debug.cfs_rq[10]:/.min_vruntime
53.25 ± 6% -24.4% 40.25 ± 2% sched_debug.cfs_rq[10]:/.nr_spread_over
-9975 ±-167% +515.1% -61355 ±-28% sched_debug.cfs_rq[10]:/.spread0
9375 ± 4% -27.8% 6766 ± 4% sched_debug.cfs_rq[11]:/.exec_clock
54.00 ± 6% -28.2% 38.75 ± 9% sched_debug.cfs_rq[11]:/.nr_spread_over
-26030 ±-80% +151.9% -65565 ±-33% sched_debug.cfs_rq[11]:/.spread0
47694 ± 8% +25.3% 59751 ± 6% sched_debug.cfs_rq[12]:/.min_vruntime
18.50 ± 14% +41.9% 26.25 ± 11% sched_debug.cfs_rq[12]:/.nr_spread_over
2556 ± 5% +62.8% 4160 ± 15% sched_debug.cfs_rq[13]:/.exec_clock
43878 ± 8% +51.8% 66592 ± 20% sched_debug.cfs_rq[13]:/.min_vruntime
2813 ± 11% +27.2% 3579 ± 5% sched_debug.cfs_rq[14]:/.exec_clock
7.25 ±173% +331.0% 31.25 ± 73% sched_debug.cfs_rq[14]:/.load
43638 ± 15% +45.7% 63601 ± 18% sched_debug.cfs_rq[14]:/.min_vruntime
5.00 ± 37% +135.0% 11.75 ± 32% sched_debug.cfs_rq[14]:/.util_avg
2665 ± 5% +39.2% 3711 ± 11% sched_debug.cfs_rq[15]:/.exec_clock
40446 ± 14% +34.6% 54455 ± 11% sched_debug.cfs_rq[15]:/.min_vruntime
2510 ± 3% +50.5% 3778 ± 9% sched_debug.cfs_rq[16]:/.exec_clock
14.00 ± 15% +69.6% 23.75 ± 9% sched_debug.cfs_rq[16]:/.nr_spread_over
-218464 ± -6% +10.4% -241264 ± -4% sched_debug.cfs_rq[16]:/.spread0
2750 ± 7% +24.8% 3432 ± 2% sched_debug.cfs_rq[17]:/.exec_clock
3203 ± 5% -28.9% 2278 ± 7% sched_debug.cfs_rq[18]:/.exec_clock
59639 ± 4% -20.3% 47534 ± 6% sched_debug.cfs_rq[18]:/.min_vruntime
-217617 ± -5% +17.6% -256020 ± -5% sched_debug.cfs_rq[18]:/.spread0
2975 ± 5% -22.7% 2301 ± 12% sched_debug.cfs_rq[19]:/.exec_clock
-223305 ± -7% +15.8% -258545 ± -6% sched_debug.cfs_rq[19]:/.spread0
8007 ± 3% +14.1% 9134 ± 4% sched_debug.cfs_rq[1]:/.exec_clock
3094 ± 7% -24.1% 2349 ± 2% sched_debug.cfs_rq[20]:/.exec_clock
54684 ± 15% -25.7% 40653 ± 14% sched_debug.cfs_rq[20]:/.min_vruntime
-222572 ± -7% +18.1% -262902 ± -5% sched_debug.cfs_rq[20]:/.spread0
3057 ± 4% -23.0% 2354 ± 10% sched_debug.cfs_rq[21]:/.exec_clock
2997 ± 4% -23.3% 2298 ± 9% sched_debug.cfs_rq[22]:/.exec_clock
50613 ± 9% -18.3% 41365 ± 16% sched_debug.cfs_rq[22]:/.min_vruntime
19.75 ± 6% -21.5% 15.50 ± 17% sched_debug.cfs_rq[22]:/.nr_spread_over
-226643 ± -4% +15.7% -262191 ± -5% sched_debug.cfs_rq[22]:/.spread0
23.25 ± 47% -72.0% 6.50 ± 65% sched_debug.cfs_rq[22]:/.util_avg
-220864 ± -5% +15.5% -255074 ± -6% sched_debug.cfs_rq[23]:/.spread0
-28540 ±-34% +95.8% -55873 ±-18% sched_debug.cfs_rq[2]:/.spread0
47.25 ± 6% +24.3% 58.75 ± 6% sched_debug.cfs_rq[5]:/.nr_spread_over
9369 ± 1% -23.3% 7187 ± 3% sched_debug.cfs_rq[6]:/.exec_clock
63.25 ± 9% -18.6% 51.50 ± 13% sched_debug.cfs_rq[6]:/.nr_spread_over
9502 ± 2% -26.7% 6965 ± 4% sched_debug.cfs_rq[7]:/.exec_clock
58.25 ± 8% -20.6% 46.25 ± 4% sched_debug.cfs_rq[7]:/.nr_spread_over
-33020 ±-61% +117.1% -71698 ±-19% sched_debug.cfs_rq[7]:/.spread0
9332 ± 3% -26.4% 6864 ± 3% sched_debug.cfs_rq[8]:/.exec_clock
254022 ± 2% -9.0% 231037 ± 1% sched_debug.cfs_rq[8]:/.min_vruntime
-23229 ±-73% +212.1% -72492 ±-19% sched_debug.cfs_rq[8]:/.spread0
9519 ± 3% -26.4% 7007 ± 3% sched_debug.cfs_rq[9]:/.exec_clock
-10930 ±-196% +425.0% -57389 ±-11% sched_debug.cfs_rq[9]:/.spread0
175146 ± 5% -28.7% 124945 ± 5% sched_debug.cpu#0.nr_switches
138.75 ± 21% -32.4% 93.75 ± 28% sched_debug.cpu#0.nr_uninterruptible
65351 ± 7% -20.6% 51877 ± 7% sched_debug.cpu#0.sched_goidle
454109 ± 3% +30.2% 591454 ± 4% sched_debug.cpu#0.ttwu_count
36456 ± 1% -35.0% 23705 ± 5% sched_debug.cpu#0.ttwu_local
159152 ± 5% -26.4% 117194 ± 5% sched_debug.cpu#1.nr_switches
60806 ± 6% -18.8% 49354 ± 5% sched_debug.cpu#1.sched_goidle
31121 ± 5% -35.6% 20032 ± 10% sched_debug.cpu#1.ttwu_local
38223 ± 4% -10.9% 34062 ± 5% sched_debug.cpu#10.nr_load_updates
165227 ± 3% -43.8% 92896 ± 3% sched_debug.cpu#10.nr_switches
257578 ± 4% -33.7% 170890 ± 6% sched_debug.cpu#10.sched_count
59662 ± 4% -33.9% 39423 ± 4% sched_debug.cpu#10.sched_goidle
161422 ± 6% -34.7% 105365 ± 5% sched_debug.cpu#10.ttwu_count
34048 ± 4% -57.5% 14483 ± 11% sched_debug.cpu#10.ttwu_local
39424 ± 7% -16.4% 32956 ± 4% sched_debug.cpu#11.nr_load_updates
172939 ± 10% -47.9% 90094 ± 6% sched_debug.cpu#11.nr_switches
273257 ± 7% -37.4% 171120 ± 6% sched_debug.cpu#11.sched_count
63637 ± 13% -40.3% 38022 ± 8% sched_debug.cpu#11.sched_goidle
155856 ± 5% -32.7% 104830 ± 5% sched_debug.cpu#11.ttwu_count
34928 ± 9% -61.4% 13465 ± 5% sched_debug.cpu#11.ttwu_local
0.50 ±100% +550.0% 3.25 ± 50% sched_debug.cpu#12.cpu_load[0]
0.33 ±141% +2825.0% 9.75 ± 46% sched_debug.cpu#12.cpu_load[1]
0.00 ± 0% +Inf% 10.75 ± 53% sched_debug.cpu#12.cpu_load[2]
34474 ± 6% -20.2% 27520 ± 4% sched_debug.cpu#12.nr_switches
-6.25 ±-44% +292.0% -24.50 ±-19% sched_debug.cpu#12.nr_uninterruptible
60396 ± 5% +29.7% 78305 ± 5% sched_debug.cpu#12.sched_count
10850 ± 3% -16.2% 9096 ± 3% sched_debug.cpu#12.sched_goidle
31456 ± 7% +35.7% 42670 ± 15% sched_debug.cpu#12.ttwu_count
8534 ± 9% -24.4% 6448 ± 6% sched_debug.cpu#12.ttwu_local
34463 ± 9% -24.4% 26054 ± 4% sched_debug.cpu#13.nr_switches
11721 ± 11% -22.0% 9139 ± 4% sched_debug.cpu#13.sched_goidle
28660 ± 21% +32.6% 38008 ± 11% sched_debug.cpu#13.ttwu_count
8099 ± 10% -29.3% 5728 ± 4% sched_debug.cpu#13.ttwu_local
7.25 ±173% +331.0% 31.25 ± 73% sched_debug.cpu#14.load
7767 ± 11% -29.6% 5469 ± 3% sched_debug.cpu#14.ttwu_local
33863 ± 7% -24.1% 25692 ± 5% sched_debug.cpu#15.nr_switches
53077 ± 8% +17.8% 62540 ± 4% sched_debug.cpu#15.sched_count
11485 ± 8% -20.6% 9120 ± 6% sched_debug.cpu#15.sched_goidle
7561 ± 8% -28.0% 5445 ± 4% sched_debug.cpu#15.ttwu_local
0.50 ±781% -1400.0% -6.50 ±-58% sched_debug.cpu#16.nr_uninterruptible
50380 ± 6% +32.3% 66650 ± 9% sched_debug.cpu#16.sched_count
32988 ± 13% +30.3% 42982 ± 10% sched_debug.cpu#16.ttwu_count
7258 ± 3% -24.6% 5474 ± 5% sched_debug.cpu#16.ttwu_local
32669 ± 7% -21.0% 25796 ± 3% sched_debug.cpu#17.nr_switches
51010 ± 7% +27.7% 65151 ± 7% sched_debug.cpu#17.sched_count
10905 ± 7% -15.1% 9263 ± 4% sched_debug.cpu#17.sched_goidle
7554 ± 7% -29.4% 5333 ± 2% sched_debug.cpu#17.ttwu_local
42588 ± 5% -57.5% 18111 ± 5% sched_debug.cpu#18.nr_switches
48946 ± 5% -45.6% 26618 ± 3% sched_debug.cpu#18.sched_count
14079 ± 6% -52.5% 6685 ± 5% sched_debug.cpu#18.sched_goidle
59721 ± 14% -34.8% 38956 ± 15% sched_debug.cpu#18.ttwu_count
9908 ± 4% -60.9% 3871 ± 5% sched_debug.cpu#18.ttwu_local
42395 ± 9% -55.6% 18832 ± 7% sched_debug.cpu#19.nr_switches
48669 ± 8% -41.7% 28376 ± 10% sched_debug.cpu#19.sched_count
14408 ± 11% -51.8% 6938 ± 4% sched_debug.cpu#19.sched_goidle
53917 ± 8% -30.3% 37553 ± 22% sched_debug.cpu#19.ttwu_count
9427 ± 4% -56.6% 4092 ± 11% sched_debug.cpu#19.ttwu_local
147731 ± 6% -23.9% 112487 ± 13% sched_debug.cpu#2.nr_switches
29874 ± 7% -38.2% 18457 ± 4% sched_debug.cpu#2.ttwu_local
40196 ± 1% -53.8% 18575 ± 7% sched_debug.cpu#20.nr_switches
46237 ± 2% -42.8% 26424 ± 7% sched_debug.cpu#20.sched_count
13149 ± 1% -48.0% 6832 ± 9% sched_debug.cpu#20.sched_goidle
58681 ± 17% -32.2% 39806 ± 7% sched_debug.cpu#20.ttwu_count
9561 ± 3% -58.7% 3948 ± 3% sched_debug.cpu#20.ttwu_local
44696 ± 12% -56.2% 19577 ± 4% sched_debug.cpu#21.nr_switches
50784 ± 13% -44.8% 28024 ± 9% sched_debug.cpu#21.sched_count
15527 ± 16% -52.4% 7393 ± 8% sched_debug.cpu#21.sched_goidle
57233 ± 8% -40.5% 34066 ± 11% sched_debug.cpu#21.ttwu_count
9417 ± 2% -56.3% 4111 ± 8% sched_debug.cpu#21.ttwu_local
24.00 ± 74% -99.0% 0.25 ±173% sched_debug.cpu#22.cpu_load[1]
19107 ± 0% -9.1% 17364 ± 2% sched_debug.cpu#22.nr_load_updates
39083 ± 4% -45.3% 21397 ± 18% sched_debug.cpu#22.nr_switches
45608 ± 4% -36.6% 28906 ± 16% sched_debug.cpu#22.sched_count
12789 ± 4% -35.0% 8314 ± 22% sched_debug.cpu#22.sched_goidle
55277 ± 8% -26.6% 40574 ± 16% sched_debug.cpu#22.ttwu_count
9233 ± 5% -57.4% 3931 ± 8% sched_debug.cpu#22.ttwu_local
10.25 ± 85% -95.1% 0.50 ±173% sched_debug.cpu#23.cpu_load[3]
8.50 ± 71% -91.2% 0.75 ±173% sched_debug.cpu#23.cpu_load[4]
40504 ± 5% -50.9% 19893 ± 11% sched_debug.cpu#23.nr_switches
46592 ± 5% -37.8% 28998 ± 12% sched_debug.cpu#23.sched_count
13555 ± 8% -44.8% 7482 ± 12% sched_debug.cpu#23.sched_goidle
9260 ± 2% -57.6% 3929 ± 5% sched_debug.cpu#23.ttwu_local
138922 ± 7% -26.0% 102743 ± 6% sched_debug.cpu#3.nr_switches
50963 ± 9% -16.1% 42759 ± 7% sched_debug.cpu#3.sched_goidle
28726 ± 1% -45.2% 15753 ± 5% sched_debug.cpu#3.ttwu_local
144321 ± 11% -33.2% 96457 ± 4% sched_debug.cpu#4.nr_switches
53378 ± 14% -25.9% 39541 ± 5% sched_debug.cpu#4.sched_goidle
28224 ± 3% -40.6% 16760 ± 8% sched_debug.cpu#4.ttwu_local
150005 ± 9% -25.7% 111426 ± 8% sched_debug.cpu#5.nr_switches
56676 ± 10% -17.4% 46823 ± 10% sched_debug.cpu#5.sched_goidle
27292 ± 8% -35.7% 17541 ± 13% sched_debug.cpu#5.ttwu_local
38010 ± 2% -10.1% 34177 ± 6% sched_debug.cpu#6.nr_load_updates
168056 ± 4% -40.0% 100828 ± 15% sched_debug.cpu#6.nr_switches
262900 ± 3% -31.4% 180287 ± 11% sched_debug.cpu#6.sched_count
61232 ± 6% -29.9% 42898 ± 18% sched_debug.cpu#6.sched_goidle
154457 ± 3% -25.4% 115273 ± 1% sched_debug.cpu#6.ttwu_count
33443 ± 2% -55.3% 14959 ± 9% sched_debug.cpu#6.ttwu_local
170525 ± 5% -42.4% 98162 ± 9% sched_debug.cpu#7.nr_switches
-46.50 ±-23% -69.9% -14.00 ±-67% sched_debug.cpu#7.nr_uninterruptible
265048 ± 4% -30.2% 185134 ± 5% sched_debug.cpu#7.sched_count
62399 ± 7% -33.0% 41796 ± 10% sched_debug.cpu#7.sched_goidle
154392 ± 1% -30.4% 107510 ± 5% sched_debug.cpu#7.ttwu_count
33054 ± 2% -53.2% 15465 ± 15% sched_debug.cpu#7.ttwu_local
37815 ± 2% -11.0% 33641 ± 5% sched_debug.cpu#8.nr_load_updates
163043 ± 5% -43.7% 91771 ± 8% sched_debug.cpu#8.nr_switches
-43.75 ±-27% -81.1% -8.25 ±-136% sched_debug.cpu#8.nr_uninterruptible
258882 ± 3% -34.3% 170056 ± 4% sched_debug.cpu#8.sched_count
58242 ± 6% -33.4% 38773 ± 10% sched_debug.cpu#8.sched_goidle
157201 ± 2% -32.7% 105794 ± 8% sched_debug.cpu#8.ttwu_count
33488 ± 3% -58.9% 13772 ± 9% sched_debug.cpu#8.ttwu_local
37709 ± 2% -11.5% 33383 ± 3% sched_debug.cpu#9.nr_load_updates
172842 ± 10% -45.0% 95016 ± 9% sched_debug.cpu#9.nr_switches
-38.75 ±-45% -76.8% -9.00 ±-82% sched_debug.cpu#9.nr_uninterruptible
270984 ± 6% -35.9% 173630 ± 5% sched_debug.cpu#9.sched_count
62874 ± 12% -36.0% 40237 ± 11% sched_debug.cpu#9.sched_goidle
156190 ± 5% -33.3% 104228 ± 6% sched_debug.cpu#9.ttwu_count
33673 ± 3% -59.4% 13669 ± 5% sched_debug.cpu#9.ttwu_local
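A minimal sketch of how the %change column relates to the two per-commit means,
using the headline metric as a worked example (the one-decimal rounding is an
assumption about how the harness formats the column):

    # base = mean on a41cbe86df3afbc8, tested = mean on 048883e0b934d9a5
    # (values copied from the fsmark.time.involuntary_context_switches row above)
    awk 'BEGIN { base = 261986; tested = 77543;
                 printf "%+.1f%%\n", (tested - base) / base * 100 }'
    # prints -70.4%, matching the table and the subject line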
lkp-ws02: Westmere-EP
Memory: 16G
fsmark.time.involuntary_context_switches, per run:

    [*] bisect-good samples (a41cbe86df3afbc8): roughly constant at ~260000
    [O] bisect-bad samples  (048883e0b934d9a5): roughly constant at ~77500
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang