Message-ID: <20161107024810.GD21529@yexl-desktop>
Date: Mon, 7 Nov 2016 10:48:10 +0800
From: kernel test robot <xiaolong.ye@...el.com>
To: Huang Ying <ying.huang@...el.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>,
Hugh Dickins <hughd@...gle.com>, Shaohua Li <shli@...nel.org>,
Minchan Kim <minchan@...nel.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Tejun Heo <tj@...nel.org>,
Wu Fengguang <fengguang.wu@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [lkp] [mm] 371a096edf: vm-scalability.throughput 6.1% improvement
Greetings,

FYI, we noticed a 6.1% improvement in vm-scalability.throughput due to commit:
commit 371a096edf43a8c71844cf71c20765c8b21d07d9 ("mm: don't use radix tree writeback tags for pages in swap cache")
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: vm-scalability
on test machine: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 128G memory
with the following parameters:

	runtime: 300
	thp_enabled: never
	thp_defrag: never
	nr_task: 16
	nr_ssd: 1
	test: swap-w-seq
	cpufreq_governor: performance
The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
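For context, pages in the swap cache pass through writeback when they are swapped out, but unlike file pages they are never found via tagged radix tree lookups during inode writeback, so maintaining the PAGECACHE_TAG_WRITEBACK tag (and taking mapping->tree_lock to do so) is pure overhead on the swap-out path. Below is a hypothetical, simplified sketch of that idea in 4.8-era kernel terms; it is an illustration only, not the commit's actual diff (the real change is in the writeback accounting paths of mm/page-writeback.c), and sketch_set_page_writeback is a made-up name:

	/*
	 * Illustration only: mark a page under writeback, touching the
	 * radix tree writeback tag only for file-backed pages.  Swap
	 * cache pages skip the tag update and the tree_lock round trip
	 * entirely, since nothing looks them up by writeback tag.
	 */
	static void sketch_set_page_writeback(struct page *page)
	{
		struct address_space *mapping = page_mapping(page);

		SetPageWriteback(page);
		if (mapping && !PageSwapCache(page)) {
			unsigned long flags;

			/* file page: keep the tag so writeback can find it */
			spin_lock_irqsave(&mapping->tree_lock, flags);
			radix_tree_tag_set(&mapping->page_tree,
					   page_index(page),
					   PAGECACHE_TAG_WRITEBACK);
			spin_unlock_irqrestore(&mapping->tree_lock, flags);
		}
	}

Skipping a per-page locked tree update on swap-out is consistent with the higher swap-out bandwidth (vmstat.swap.so) and, in the nr_task=64 configuration below, the sharply lower system time reported for this test.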
In addition, the commit has a significant impact on the following test:
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 14.1% improvement |
| test machine | 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | nr_ssd=1 |
| | nr_task=64 |
| | runtime=300 |
| | test=swap-w-seq |
| | thp_defrag=never |
| | thp_enabled=always |
+------------------+-----------------------------------------------------------------------+
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:

	git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
	cd lkp-tests
	bin/lkp install job.yaml  # job file is attached in this email
	bin/lkp run     job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_ssd/nr_task/rootfs/runtime/tbox_group/test/testcase/thp_defrag/thp_enabled:
gcc-6/performance/x86_64-rhel-7.2/1/16/debian-x86_64-2016-08-31.cgz/300/lkp-hsw-ep4/swap-w-seq/vm-scalability/never/never
commit:
1d8bf926f8 ("mm/bootmem.c: replace kzalloc() by kzalloc_node()")
371a096edf ("mm: don't use radix tree writeback tags for pages in swap cache")
1d8bf926f8739bd3           371a096edf43a8c71844cf71c2
----------------           --------------------------
       fail:runs          %reproduction    fail:runs
           |                   |               |
         %stddev            %change          %stddev
             \                 |               \
2213672 ± 0% +6.1% 2348422 ± 1% vm-scalability.throughput
534544 ± 1% +6.8% 570794 ± 4% vm-scalability.time.involuntary_context_switches
1.458e+08 ± 0% +7.0% 1.56e+08 ± 1% vm-scalability.time.minor_page_faults
4991 ± 50% +3615.9% 185462 ± 31% vm-scalability.time.voluntary_context_switches
0.08 ± 27% +3440.8% 2.69 ± 50% turbostat.CPU%c3
229747 ± 5% +47.7% 339402 ± 11% softirqs.RCU
270638 ± 1% +13.0% 305899 ± 2% softirqs.SCHED
1277270 ± 0% +7.8% 1377440 ± 1% vmstat.io.bo
1277264 ± 0% +7.8% 1377434 ± 1% vmstat.swap.so
5148 ± 0% +50.8% 7765 ± 9% vmstat.system.cs
110151 ± 0% +30.6% 143842 ± 11% meminfo.SUnreclaim
9703 ± 20% -32.8% 6516 ± 28% meminfo.Shmem
783039 ± 5% -26.3% 577083 ± 17% meminfo.SwapCached
1167 ± 27% +27840.3% 326175 ± 61% meminfo.Writeback
637164 ± 5% -25.4% 475615 ± 9% numa-meminfo.node0.FilePages
243.40 ± 36% +19536.4% 47794 ± 58% numa-meminfo.node0.Writeback
50438 ± 4% +31.7% 66437 ± 12% numa-meminfo.node1.SUnreclaim
83150 ± 4% +17.5% 97667 ± 8% numa-meminfo.node1.Slab
1045 ± 24% +25899.2% 271899 ± 60% numa-meminfo.node1.Writeback
159271 ± 5% -25.4% 118746 ± 9% numa-vmstat.node0.nr_file_pages
59.60 ± 37% +19680.4% 11789 ± 61% numa-vmstat.node0.nr_writeback
60.00 ± 37% +19544.0% 11786 ± 61% numa-vmstat.node0.nr_zone_write_pending
12609 ± 4% +31.5% 16579 ± 12% numa-vmstat.node1.nr_slab_unreclaimable
265.30 ± 19% +25322.4% 67445 ± 61% numa-vmstat.node1.nr_writeback
266.80 ± 19% +25179.8% 67446 ± 61% numa-vmstat.node1.nr_zone_write_pending
7424364 ± 50% +715.3% 60527951 ± 47% cpuidle.C1-HSW.time
85779 ± 8% +394.3% 423973 ± 36% cpuidle.C1-HSW.usage
11362590 ± 16% +1931.5% 2.308e+08 ± 58% cpuidle.C1E-HSW.time
58721 ± 12% +1054.1% 677675 ± 52% cpuidle.C1E-HSW.usage
60495231 ± 13% +1096.4% 7.237e+08 ± 46% cpuidle.C3-HSW.time
181445 ± 13% +599.7% 1269496 ± 43% cpuidle.C3-HSW.usage
2126 ± 9% +798.9% 19117 ± 46% cpuidle.POLL.usage
40653369 ± 70% -88.3% 4756647 ± 58% proc-vmstat.compact_migrate_scanned
370154 ± 3% -13.7% 319306 ± 8% proc-vmstat.nr_file_pages
27536 ± 0% +30.4% 35899 ± 11% proc-vmstat.nr_slab_unreclaimable
71520720 ± 5% +12.4% 80357490 ± 3% proc-vmstat.nr_vmscan_write
292.40 ± 31% +27493.5% 80683 ± 62% proc-vmstat.nr_writeback
294.00 ± 31% +27342.9% 80682 ± 62% proc-vmstat.nr_zone_write_pending
1.262e+08 ± 3% +15.9% 1.463e+08 ± 1% proc-vmstat.numa_pte_updates
87331985 ± 1% +33.7% 1.168e+08 ± 3% proc-vmstat.pgrotated
28607602 ± 2% +22.0% 34898577 ± 2% proc-vmstat.pgscan_kswapd
14766402 ± 4% +20.4% 17782412 ± 4% proc-vmstat.pgsteal_kswapd
1828938 ± 1% +52.5% 2790009 ± 9% perf-stat.context-switches
17676 ± 6% +309.2% 72326 ± 29% perf-stat.cpu-migrations
0.06 ± 11% +112.1% 0.13 ± 23% perf-stat.dTLB-load-miss-rate%
9.969e+08 ± 13% +93.7% 1.931e+09 ± 18% perf-stat.dTLB-load-misses
0.22 ± 2% +10.0% 0.24 ± 5% perf-stat.dTLB-store-miss-rate%
8.69e+08 ± 4% +18.2% 1.027e+09 ± 9% perf-stat.dTLB-store-misses
36.04 ± 2% +8.4% 39.05 ± 2% perf-stat.iTLB-load-miss-rate%
1.331e+08 ± 3% +19.8% 1.594e+08 ± 6% perf-stat.iTLB-load-misses
1.463e+08 ± 0% +7.0% 1.565e+08 ± 1% perf-stat.minor-faults
60.76 ± 1% +2.8% 62.46 ± 0% perf-stat.node-store-miss-rate%
1.463e+08 ± 0% +7.0% 1.565e+08 ± 1% perf-stat.page-faults
178.50 ± 14% +201.1% 537.40 ± 14% slabinfo.bdev_cache.active_objs
178.50 ± 14% +201.1% 537.40 ± 14% slabinfo.bdev_cache.num_objs
444.80 ± 12% +66.6% 741.10 ± 13% slabinfo.file_lock_cache.active_objs
444.80 ± 12% +66.6% 741.10 ± 13% slabinfo.file_lock_cache.num_objs
4019 ± 1% +100.4% 8054 ± 13% slabinfo.kmalloc-1024.active_objs
128.10 ± 0% +100.5% 256.90 ± 14% slabinfo.kmalloc-1024.active_slabs
4100 ± 0% +100.8% 8232 ± 14% slabinfo.kmalloc-1024.num_objs
128.10 ± 0% +100.5% 256.90 ± 14% slabinfo.kmalloc-1024.num_slabs
7578 ± 0% +21.8% 9232 ± 7% slabinfo.kmalloc-192.active_objs
7609 ± 0% +21.4% 9238 ± 7% slabinfo.kmalloc-192.num_objs
4829 ± 1% +47.7% 7134 ± 11% slabinfo.kmalloc-2048.active_objs
312.10 ± 2% +45.2% 453.10 ± 11% slabinfo.kmalloc-2048.active_slabs
4898 ± 1% +47.9% 7246 ± 11% slabinfo.kmalloc-2048.num_objs
312.10 ± 2% +45.2% 453.10 ± 11% slabinfo.kmalloc-2048.num_slabs
16028 ± 6% +479.7% 92924 ± 51% slabinfo.kmalloc-256.active_objs
351.20 ± 4% +1071.3% 4113 ± 56% slabinfo.kmalloc-256.active_slabs
16315 ± 5% +487.8% 95897 ± 51% slabinfo.kmalloc-256.num_objs
351.20 ± 4% +1071.3% 4113 ± 56% slabinfo.kmalloc-256.num_slabs
751.10 ± 6% +74.9% 1313 ± 14% slabinfo.nsproxy.active_objs
751.10 ± 6% +74.9% 1313 ± 14% slabinfo.nsproxy.num_objs
37639 ± 4% -16.0% 31603 ± 6% slabinfo.radix_tree_node.active_objs
705.70 ± 4% -17.4% 583.10 ± 7% slabinfo.radix_tree_node.active_slabs
39498 ± 3% -17.4% 32627 ± 7% slabinfo.radix_tree_node.num_objs
705.70 ± 4% -17.4% 583.10 ± 7% slabinfo.radix_tree_node.num_slabs
146695 ± 1% -28.0% 105561 ± 17% sched_debug.cfs_rq:/.exec_clock.max
57.20 ± 73% +574.1% 385.58 ± 26% sched_debug.cfs_rq:/.exec_clock.min
50781 ± 2% -26.8% 37173 ± 18% sched_debug.cfs_rq:/.exec_clock.stddev
150.31 ± 5% -25.0% 112.67 ± 17% sched_debug.cfs_rq:/.load_avg.stddev
2792970 ± 4% -40.8% 1653864 ± 29% sched_debug.cfs_rq:/.min_vruntime.max
918436 ± 5% -38.0% 569297 ± 31% sched_debug.cfs_rq:/.min_vruntime.stddev
842.53 ± 5% -31.2% 579.83 ± 18% sched_debug.cfs_rq:/.runnable_load_avg.max
134.83 ± 5% -34.0% 89.01 ± 24% sched_debug.cfs_rq:/.runnable_load_avg.stddev
918795 ± 5% -37.9% 570175 ± 31% sched_debug.cfs_rq:/.spread0.stddev
380.58 ± 2% -14.8% 324.26 ± 10% sched_debug.cfs_rq:/.util_avg.stddev
42.86 ± 17% +1021.2% 480.60 ± 88% sched_debug.cpu.clock.stddev
42.86 ± 17% +1021.2% 480.60 ± 88% sched_debug.cpu.clock_task.stddev
843.67 ± 5% -31.3% 579.83 ± 18% sched_debug.cpu.cpu_load[0].max
134.98 ± 5% -34.0% 89.02 ± 24% sched_debug.cpu.cpu_load[0].stddev
136.36 ± 5% -30.6% 94.67 ± 26% sched_debug.cpu.cpu_load[3].stddev
136.89 ± 5% -31.3% 94.00 ± 26% sched_debug.cpu.cpu_load[4].stddev
441.52 ± 5% -16.4% 369.26 ± 13% sched_debug.cpu.curr->pid.avg
0.00 ± 10% +846.3% 0.00 ± 86% sched_debug.cpu.next_balance.stddev
155148 ± 0% -16.9% 128991 ± 11% sched_debug.cpu.nr_load_updates.max
38237 ± 2% -27.9% 27553 ± 19% sched_debug.cpu.nr_load_updates.stddev
12989 ± 1% +66.7% 21655 ± 11% sched_debug.cpu.nr_switches.avg
38611 ± 9% +50.9% 58253 ± 13% sched_debug.cpu.nr_switches.max
1120 ± 32% +350.3% 5042 ± 29% sched_debug.cpu.nr_switches.min
10487 ± 3% +27.6% 13383 ± 9% sched_debug.cpu.nr_switches.stddev
0.01 ± 14% +1005.7% 0.09 ± 38% sched_debug.cpu.nr_uninterruptible.avg
22.03 ± 19% +124.5% 49.47 ± 28% sched_debug.cpu.nr_uninterruptible.max
-25.55 ±-32% +173.8% -69.95 ±-27% sched_debug.cpu.nr_uninterruptible.min
7.95 ± 16% +144.6% 19.46 ± 18% sched_debug.cpu.nr_uninterruptible.stddev
13007 ± 2% +66.2% 21615 ± 11% sched_debug.cpu.sched_count.avg
664.22 ± 53% +590.3% 4584 ± 32% sched_debug.cpu.sched_count.min
1797 ± 3% +189.0% 5196 ± 21% sched_debug.cpu.sched_goidle.avg
112.57 ± 48% +1648.7% 1968 ± 35% sched_debug.cpu.sched_goidle.min
6018 ± 1% +76.9% 10646 ± 12% sched_debug.cpu.ttwu_count.avg
17865 ± 3% +107.9% 37145 ± 21% sched_debug.cpu.ttwu_count.max
93.22 ± 36% +600.3% 652.75 ± 27% sched_debug.cpu.ttwu_count.min
5368 ± 2% +69.2% 9082 ± 14% sched_debug.cpu.ttwu_count.stddev
4852 ± 1% +36.6% 6626 ± 8% sched_debug.cpu.ttwu_local.avg
55.75 ± 24% +310.6% 228.88 ± 16% sched_debug.cpu.ttwu_local.min
***************************************************************************************************
lkp-hsw-ep4: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_ssd/nr_task/rootfs/runtime/tbox_group/test/testcase/thp_defrag/thp_enabled:
gcc-6/performance/x86_64-rhel-7.2/1/64/debian-x86_64-2016-08-31.cgz/300/lkp-hsw-ep4/swap-w-seq/vm-scalability/never/always
commit:
1d8bf926f8 ("mm/bootmem.c: replace kzalloc() by kzalloc_node()")
371a096edf ("mm: don't use radix tree writeback tags for pages in swap cache")
1d8bf926f8739bd3           371a096edf43a8c71844cf71c2
----------------           --------------------------
       fail:runs          %reproduction    fail:runs
           |                   |               |
         %stddev            %change          %stddev
             \                 |               \
2125843 ± 0% +14.1% 2426196 ± 1% vm-scalability.throughput
741119 ± 3% +6.0% 785919 ± 2% vm-scalability.time.involuntary_context_switches
1.074e+08 ± 1% +16.5% 1.251e+08 ± 1% vm-scalability.time.minor_page_faults
6243 ± 0% -40.2% 3733 ± 10% vm-scalability.time.percent_of_cpu_this_job_got
21748 ± 0% -41.0% 12831 ± 10% vm-scalability.time.system_time
315.07 ± 1% -10.9% 280.79 ± 3% vm-scalability.time.user_time
333774 ± 2% +12.3% 374973 ± 3% softirqs.SCHED
11885858 ± 0% -39.4% 7205748 ± 9% softirqs.TIMER
40607746 ± 1% +25.4% 50919220 ± 12% numa-numastat.node0.numa_foreign
40597356 ± 1% +25.4% 50910950 ± 12% numa-numastat.node1.numa_miss
0.25 ±173% +1200.0% 3.25 ± 70% numa-numastat.node1.other_node
1211829 ± 1% +16.0% 1405484 ± 1% vmstat.io.bo
4939033 ± 3% +16.1% 5733178 ± 6% vmstat.memory.free
2.1e+08 ± 1% +14.9% 2.412e+08 ± 2% vmstat.memory.swpd
0.00 ± 0% +Inf% 31.50 ± 17% vmstat.procs.b
67.00 ± 0% -42.5% 38.50 ± 10% vmstat.procs.r
1211824 ± 1% +16.0% 1405478 ± 1% vmstat.swap.so
92.51 ± 0% -39.5% 56.01 ± 9% turbostat.%Busy
2583 ± 0% -39.4% 1566 ± 9% turbostat.Avg_MHz
6.96 ± 2% +406.3% 35.26 ± 12% turbostat.CPU%c1
0.01 ± 0% +24025.0% 2.41 ± 19% turbostat.CPU%c3
0.52 ± 21% +1125.7% 6.31 ± 14% turbostat.CPU%c6
258.28 ± 0% -12.0% 227.24 ± 1% turbostat.PkgWatt
55.36 ± 0% +1.1% 55.98 ± 0% turbostat.RAMWatt
28505239 ± 10% -31.5% 19524763 ± 15% meminfo.AnonHugePages
5537897 ± 9% -14.4% 4740114 ± 9% meminfo.DirectMap2M
5822789 ± 2% +13.2% 6594234 ± 5% meminfo.MemAvailable
6222814 ± 2% +12.5% 7001717 ± 4% meminfo.MemFree
709649 ± 0% +12.6% 798807 ± 5% meminfo.PageTables
111455 ± 0% +55.1% 172917 ± 6% meminfo.SUnreclaim
169503 ± 0% +37.7% 233321 ± 4% meminfo.Slab
1504686 ± 3% -24.8% 1131932 ± 15% meminfo.SwapCached
3028 ± 10% +22884.4% 695969 ± 20% meminfo.Writeback
43194655 ± 11% +619.4% 3.107e+08 ± 16% cpuidle.C1-HSW.time
622561 ± 17% +104.2% 1271319 ± 17% cpuidle.C1-HSW.usage
14115248 ± 15% +9297.9% 1.327e+09 ± 16% cpuidle.C1E-HSW.time
84418 ± 8% +2439.5% 2143769 ± 16% cpuidle.C1E-HSW.usage
53083465 ± 15% +2556.0% 1.41e+09 ± 21% cpuidle.C3-HSW.time
154076 ± 13% +1115.8% 1873320 ± 20% cpuidle.C3-HSW.usage
1.803e+09 ± 3% +350.2% 8.117e+09 ± 10% cpuidle.C6-HSW.time
1936956 ± 3% +347.5% 8667200 ± 11% cpuidle.C6-HSW.usage
14995046 ± 10% +2237.0% 3.504e+08 ± 14% cpuidle.POLL.time
13351 ± 17% +202.3% 40357 ± 15% cpuidle.POLL.usage
22648511 ± 14% -36.8% 14307991 ± 19% numa-meminfo.node0.AnonHugePages
1045808 ± 5% -47.9% 544659 ± 15% numa-meminfo.node0.FilePages
58493 ± 4% +57.0% 91837 ± 11% numa-meminfo.node0.SUnreclaim
85451 ± 4% +40.6% 120107 ± 9% numa-meminfo.node0.Slab
1041 ± 12% +15041.9% 157627 ± 13% numa-meminfo.node0.Writeback
5767065 ± 4% -32.4% 3896120 ± 36% numa-meminfo.node1.AnonHugePages
293938 ± 5% +50.9% 443468 ± 27% numa-meminfo.node1.PageTables
52953 ± 4% +53.2% 81112 ± 14% numa-meminfo.node1.SUnreclaim
17882 ± 4% -55.2% 8018 ± 59% numa-meminfo.node1.Shmem
84052 ± 5% +34.8% 113281 ± 11% numa-meminfo.node1.Slab
2236 ± 8% +23862.8% 535987 ± 24% numa-meminfo.node1.Writeback
11242 ± 13% -38.0% 6974 ± 21% numa-vmstat.node0.nr_anon_transparent_hugepages
262464 ± 4% -48.5% 135174 ± 17% numa-vmstat.node0.nr_file_pages
1215804 ± 3% +13.8% 1383278 ± 4% numa-vmstat.node0.nr_free_pages
1287 ± 17% +102.0% 2599 ± 24% numa-vmstat.node0.nr_isolated_anon
14625 ± 4% +54.0% 22521 ± 12% numa-vmstat.node0.nr_slab_unreclaimable
10793807 ± 9% +36.2% 14705195 ± 5% numa-vmstat.node0.nr_vmscan_write
233.00 ± 7% +15561.6% 36491 ± 15% numa-vmstat.node0.nr_writeback
10793782 ± 9% +35.9% 14668927 ± 5% numa-vmstat.node0.nr_written
233.00 ± 8% +15561.5% 36491 ± 15% numa-vmstat.node0.nr_zone_write_pending
22566322 ± 3% +31.3% 29625222 ± 12% numa-vmstat.node0.numa_foreign
2872 ± 4% -27.1% 2093 ± 23% numa-vmstat.node1.nr_anon_transparent_hugepages
404362 ± 9% +34.0% 541983 ± 19% numa-vmstat.node1.nr_free_pages
156.00 ± 11% +95.4% 304.75 ± 15% numa-vmstat.node1.nr_pages_scanned
13235 ± 4% +50.2% 19874 ± 14% numa-vmstat.node1.nr_slab_unreclaimable
49196401 ± 1% +24.1% 61044573 ± 7% numa-vmstat.node1.nr_vmscan_write
572.50 ± 14% +21710.0% 124862 ± 28% numa-vmstat.node1.nr_writeback
49196179 ± 1% +23.8% 60919929 ± 7% numa-vmstat.node1.nr_written
574.25 ± 14% +21643.7% 124863 ± 28% numa-vmstat.node1.nr_zone_write_pending
22550830 ± 3% +31.3% 29618804 ± 12% numa-vmstat.node1.numa_miss
4.086e+12 ± 0% -28.4% 2.928e+12 ± 5% perf-stat.branch-instructions
0.10 ± 0% +52.0% 0.15 ± 6% perf-stat.branch-miss-rate%
4.119e+09 ± 0% +8.5% 4.471e+09 ± 1% perf-stat.branch-misses
30.75 ± 0% -13.1% 26.72 ± 3% perf-stat.cache-miss-rate%
1.046e+10 ± 1% +9.9% 1.149e+10 ± 1% perf-stat.cache-misses
3.4e+10 ± 0% +26.5% 4.303e+10 ± 2% perf-stat.cache-references
6.545e+13 ± 0% -37.9% 4.065e+13 ± 7% perf-stat.cpu-cycles
195546 ± 4% +32.7% 259393 ± 6% perf-stat.cpu-migrations
0.04 ± 12% +84.7% 0.08 ± 10% perf-stat.dTLB-load-miss-rate%
4.077e+12 ± 0% -33.5% 2.713e+12 ± 8% perf-stat.dTLB-loads
0.18 ± 4% +16.4% 0.21 ± 2% perf-stat.dTLB-store-miss-rate%
6.92e+08 ± 3% +17.8% 8.149e+08 ± 9% perf-stat.dTLB-store-misses
37.76 ± 2% -41.7% 22.01 ± 5% perf-stat.iTLB-load-miss-rate%
42983189 ± 3% +36.3% 58591746 ± 8% perf-stat.iTLB-load-misses
70857240 ± 2% +194.4% 2.086e+08 ± 11% perf-stat.iTLB-loads
1.656e+13 ± 0% -27.9% 1.194e+13 ± 5% perf-stat.instructions
385661 ± 3% -46.6% 205896 ± 12% perf-stat.instructions-per-iTLB-miss
0.25 ± 0% +16.3% 0.29 ± 2% perf-stat.ipc
1.079e+08 ± 1% +16.4% 1.256e+08 ± 1% perf-stat.minor-faults
85.52 ± 0% -4.7% 81.49 ± 0% perf-stat.node-load-miss-rate%
9.377e+08 ± 2% +28.9% 1.208e+09 ± 4% perf-stat.node-loads
55.12 ± 0% +2.7% 56.60 ± 1% perf-stat.node-store-miss-rate%
2.191e+09 ± 0% +4.7% 2.294e+09 ± 1% perf-stat.node-store-misses
1.079e+08 ± 1% +16.4% 1.256e+08 ± 1% perf-stat.page-faults
174.75 ± 11% +206.4% 535.50 ± 38% slabinfo.bdev_cache.active_objs
174.75 ± 11% +206.4% 535.50 ± 38% slabinfo.bdev_cache.num_objs
6222 ± 1% +18.0% 7341 ± 5% slabinfo.cred_jar.active_objs
6222 ± 1% +18.0% 7341 ± 5% slabinfo.cred_jar.num_objs
453.00 ± 11% +65.0% 747.25 ± 16% slabinfo.file_lock_cache.active_objs
453.00 ± 11% +65.0% 747.25 ± 16% slabinfo.file_lock_cache.num_objs
4046 ± 1% +101.9% 8169 ± 3% slabinfo.kmalloc-1024.active_objs
126.50 ± 0% +106.5% 261.25 ± 3% slabinfo.kmalloc-1024.active_slabs
4071 ± 0% +105.6% 8372 ± 3% slabinfo.kmalloc-1024.num_objs
126.50 ± 0% +106.5% 261.25 ± 3% slabinfo.kmalloc-1024.num_slabs
7564 ± 0% +17.1% 8853 ± 1% slabinfo.kmalloc-192.active_objs
7615 ± 0% +16.3% 8860 ± 1% slabinfo.kmalloc-192.num_objs
4766 ± 1% +69.8% 8094 ± 8% slabinfo.kmalloc-2048.active_objs
307.00 ± 2% +67.9% 515.50 ± 8% slabinfo.kmalloc-2048.active_slabs
4826 ± 1% +70.5% 8227 ± 8% slabinfo.kmalloc-2048.num_objs
307.00 ± 2% +67.9% 515.50 ± 8% slabinfo.kmalloc-2048.num_slabs
14592 ± 1% +934.5% 150965 ± 34% slabinfo.kmalloc-256.active_objs
342.25 ± 1% +1924.0% 6927 ± 34% slabinfo.kmalloc-256.active_slabs
14906 ± 1% +934.8% 154254 ± 34% slabinfo.kmalloc-256.num_objs
342.25 ± 1% +1924.0% 6927 ± 34% slabinfo.kmalloc-256.num_slabs
12576 ± 3% +51.5% 19055 ± 7% slabinfo.kmalloc-512.active_objs
218.50 ± 15% +56.3% 341.50 ± 10% slabinfo.kmalloc-512.active_slabs
12727 ± 3% +54.0% 19595 ± 9% slabinfo.kmalloc-512.num_objs
218.50 ± 15% +56.3% 341.50 ± 10% slabinfo.kmalloc-512.num_slabs
765.50 ± 4% +69.1% 1294 ± 21% slabinfo.nsproxy.active_objs
765.50 ± 4% +69.1% 1294 ± 21% slabinfo.nsproxy.num_objs
1612 ± 13% -69.5% 492.25 ± 96% proc-vmstat.compact_fail
1619 ± 12% -69.1% 501.25 ± 95% proc-vmstat.compact_stall
13831 ± 9% -31.2% 9522 ± 16% proc-vmstat.nr_anon_transparent_hugepages
140989 ± 6% +25.8% 177310 ± 4% proc-vmstat.nr_dirty_background_threshold
282323 ± 6% +25.8% 355056 ± 4% proc-vmstat.nr_dirty_threshold
471239 ± 3% -20.9% 372789 ± 11% proc-vmstat.nr_file_pages
1496965 ± 6% +24.3% 1860851 ± 3% proc-vmstat.nr_free_pages
5139 ± 4% +21.5% 6246 ± 2% proc-vmstat.nr_isolated_anon
178332 ± 1% +13.1% 201722 ± 5% proc-vmstat.nr_page_table_pages
147.75 ± 8% +100.8% 296.75 ± 18% proc-vmstat.nr_pages_scanned
27862 ± 0% +52.7% 42558 ± 6% proc-vmstat.nr_slab_unreclaimable
60269851 ± 1% +25.1% 75424523 ± 5% proc-vmstat.nr_vmscan_write
774.25 ± 2% +21072.7% 163929 ± 22% proc-vmstat.nr_writeback
1.073e+08 ± 1% +15.5% 1.239e+08 ± 2% proc-vmstat.nr_written
775.50 ± 2% +21038.5% 163928 ± 22% proc-vmstat.nr_zone_write_pending
46256859 ± 1% +26.7% 58617918 ± 8% proc-vmstat.numa_foreign
13246 ± 6% -46.1% 7141 ± 17% proc-vmstat.numa_hint_faults
8187 ± 2% -42.4% 4713 ± 20% proc-vmstat.numa_hint_faults_local
46256859 ± 1% +26.7% 58617918 ± 8% proc-vmstat.numa_miss
1.394e+08 ± 0% +13.0% 1.575e+08 ± 1% proc-vmstat.pgalloc_normal
1.084e+08 ± 1% +15.4% 1.251e+08 ± 2% proc-vmstat.pgdeactivate
1.079e+08 ± 1% +16.4% 1.257e+08 ± 1% proc-vmstat.pgfault
1.377e+08 ± 1% +13.3% 1.559e+08 ± 1% proc-vmstat.pgfree
3711 ± 22% -67.7% 1198 ± 32% proc-vmstat.pgmigrate_fail
4.293e+08 ± 1% +15.5% 4.958e+08 ± 2% proc-vmstat.pgpgout
76556208 ± 1% +21.6% 93113321 ± 2% proc-vmstat.pgrefill
61504865 ± 2% +85.7% 1.142e+08 ± 5% proc-vmstat.pgrotated
1.754e+08 ± 1% +26.7% 2.223e+08 ± 4% proc-vmstat.pgscan_direct
8178772 ± 3% +61.6% 13217350 ± 18% proc-vmstat.pgscan_kswapd
1.007e+08 ± 1% +12.8% 1.136e+08 ± 2% proc-vmstat.pgsteal_direct
6418690 ± 7% +57.7% 10125267 ± 23% proc-vmstat.pgsteal_kswapd
1.073e+08 ± 1% +15.5% 1.239e+08 ± 2% proc-vmstat.pswpout
209541 ± 1% +16.5% 244206 ± 1% proc-vmstat.thp_fault_fallback
142799 ± 0% -43.9% 80147 ± 5% sched_debug.cfs_rq:/.exec_clock.avg
131668 ± 1% -69.0% 40852 ± 14% sched_debug.cfs_rq:/.exec_clock.min
5099 ± 13% +520.8% 31657 ± 20% sched_debug.cfs_rq:/.exec_clock.stddev
227.88 ± 4% +35.0% 307.70 ± 15% sched_debug.cfs_rq:/.load_avg.min
246.74 ± 26% -53.7% 114.30 ± 23% sched_debug.cfs_rq:/.load_avg.stddev
9367426 ± 0% -58.8% 3856380 ± 10% sched_debug.cfs_rq:/.min_vruntime.avg
10027888 ± 0% -46.2% 5391954 ± 6% sched_debug.cfs_rq:/.min_vruntime.max
8536208 ± 2% -70.5% 2518516 ± 18% sched_debug.cfs_rq:/.min_vruntime.min
308749 ± 8% +188.8% 891811 ± 21% sched_debug.cfs_rq:/.min_vruntime.stddev
0.82 ± 2% -31.3% 0.56 ± 15% sched_debug.cfs_rq:/.nr_running.avg
0.23 ± 13% +43.2% 0.33 ± 8% sched_debug.cfs_rq:/.nr_running.stddev
36.70 ± 13% -67.1% 12.06 ± 20% sched_debug.cfs_rq:/.nr_spread_over.avg
123.42 ± 19% -68.2% 39.20 ± 35% sched_debug.cfs_rq:/.nr_spread_over.max
8.92 ± 14% -64.9% 3.13 ± 52% sched_debug.cfs_rq:/.nr_spread_over.min
23.11 ± 20% -74.8% 5.83 ± 27% sched_debug.cfs_rq:/.nr_spread_over.stddev
32.16 ± 6% -37.6% 20.08 ± 33% sched_debug.cfs_rq:/.runnable_load_avg.avg
653879 ± 47% +135.8% 1541932 ± 15% sched_debug.cfs_rq:/.spread0.max
315599 ± 9% +184.2% 896957 ± 21% sched_debug.cfs_rq:/.spread0.stddev
898.96 ± 1% -27.4% 652.99 ± 15% sched_debug.cfs_rq:/.util_avg.avg
673946 ± 3% +24.3% 837742 ± 4% sched_debug.cpu.avg_idle.avg
49038 ± 8% +259.6% 176360 ± 60% sched_debug.cpu.avg_idle.min
301898 ± 6% -25.5% 225046 ± 21% sched_debug.cpu.avg_idle.stddev
190551 ± 0% +9.0% 207609 ± 2% sched_debug.cpu.clock.max
233.98 ± 3% +2164.8% 5299 ± 53% sched_debug.cpu.clock.stddev
190551 ± 0% +9.0% 207609 ± 2% sched_debug.cpu.clock_task.max
233.98 ± 3% +2164.8% 5299 ± 53% sched_debug.cpu.clock_task.stddev
32.11 ± 6% -37.8% 19.98 ± 33% sched_debug.cpu.cpu_load[0].avg
35.42 ± 9% -37.8% 22.02 ± 26% sched_debug.cpu.cpu_load[1].avg
133.36 ± 15% -39.1% 81.19 ± 37% sched_debug.cpu.cpu_load[1].stddev
34.65 ± 7% -38.1% 21.45 ± 27% sched_debug.cpu.cpu_load[2].avg
126.41 ± 9% -38.0% 78.32 ± 38% sched_debug.cpu.cpu_load[2].stddev
34.35 ± 6% -38.4% 21.15 ± 28% sched_debug.cpu.cpu_load[3].avg
123.58 ± 6% -38.4% 76.07 ± 38% sched_debug.cpu.cpu_load[3].stddev
34.45 ± 5% -38.7% 21.11 ± 28% sched_debug.cpu.cpu_load[4].avg
123.29 ± 4% -38.7% 75.54 ± 38% sched_debug.cpu.cpu_load[4].stddev
1367 ± 2% -31.7% 934.09 ± 15% sched_debug.cpu.curr->pid.avg
0.00 ± 2% +2051.8% 0.01 ± 53% sched_debug.cpu.next_balance.stddev
148590 ± 1% -25.6% 110559 ± 3% sched_debug.cpu.nr_load_updates.avg
139700 ± 1% -43.5% 78865 ± 5% sched_debug.cpu.nr_load_updates.min
3733 ± 13% +519.4% 23123 ± 21% sched_debug.cpu.nr_load_updates.stddev
0.84 ± 2% -31.9% 0.57 ± 17% sched_debug.cpu.nr_running.avg
0.28 ± 10% +28.5% 0.36 ± 10% sched_debug.cpu.nr_running.stddev
14395 ± 9% +34.0% 19289 ± 23% sched_debug.cpu.nr_switches.min
0.11 ± 43% +470.4% 0.62 ± 52% sched_debug.cpu.nr_uninterruptible.avg
7400 ± 21% +31.3% 9718 ± 15% sched_debug.cpu.sched_goidle.avg
1050 ± 7% +245.9% 3632 ± 38% sched_debug.cpu.sched_goidle.min
40897 ± 23% +43.3% 58610 ± 7% sched_debug.cpu.ttwu_count.max
10300 ± 11% -51.3% 5018 ± 29% sched_debug.cpu.ttwu_count.min
5838 ± 26% +99.6% 11652 ± 12% sched_debug.cpu.ttwu_count.stddev
7617 ± 7% +18.5% 9024 ± 5% sched_debug.cpu.ttwu_local.avg
4191 ± 25% +36.8% 5733 ± 9% sched_debug.cpu.ttwu_local.stddev
Thanks,
Xiaolong