Message-ID: <87y4b5gq8q.fsf@yhuang-dev.intel.com>
Date: Mon, 01 Feb 2016 09:43:01 +0800
From: kernel test robot <ying.huang@...ux.intel.com>
TO: Vladimir Davydov <vdavydov@...tuozzo.com>
CC: 0day robot <fengguang.wu@...el.com>
Subject: [lkp] [vmpressure] 78d1788350: No primary result change, 274.1% vm-scalability.time.involuntary_context_switches
FYI, we noticed the changes below on
https://github.com/0day-ci/linux Vladimir-Davydov/vmpressure-Fix-subtree-pressure-detection/20160128-003153
commit 78d1788350477246d516496d8d7684fa80ef7f18 ("vmpressure: Fix subtree pressure detection")
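
For reference, each row in the table below lists the base kernel's mean ± %stddev,
the %change, and the patched kernel's mean ± %stddev for one metric. A minimal
sketch of how %change relates to the two means (illustrative only, not the lkp
tooling's own code):

        # e.g. vm-scalability.time.involuntary_context_switches: 195486 -> 731248
        base=195486
        patched=731248
        awk -v b="$base" -v p="$patched" 'BEGIN { printf "%+.1f%%\n", (p - b) / b * 100 }'
        # prints +274.1%
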
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/ivb43/lru-file-mmap-read/vm-scalability
commit:
v4.5-rc1
78d1788350477246d516496d8d7684fa80ef7f18
                 v4.5-rc1   78d1788350477246d516496d8d
         ----------------   --------------------------
          mean ± %stddev      %change    mean ± %stddev   metric
195486 ± 1% +274.1% 731248 ± 2% vm-scalability.time.involuntary_context_switches
2939 ± 21% +37.5% 4041 ± 12% numa-vmstat.node1.nr_anon_pages
897378 ± 1% +17.7% 1056357 ± 2% softirqs.RCU
4293 ± 4% +82.5% 7835 ± 2% vmstat.system.cs
120.40 ± 22% +41.6% 170.50 ± 19% cpuidle.C3-IVT.usage
4.723e+08 ± 4% +13.8% 5.375e+08 ± 5% cpuidle.C6-IVT.time
93336 ± 31% +611.8% 664396 ± 88% latency_stats.max.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
371333 ±166% -70.1% 110861 ±117% latency_stats.sum.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
195486 ± 1% +274.1% 731248 ± 2% time.involuntary_context_switches
717.20 ± 14% +20.3% 862.50 ± 6% time.major_page_faults
2724 ± 60% -93.6% 174.25 ±173% numa-meminfo.node0.AnonHugePages
1630 ±122% +222.2% 5254 ± 16% numa-meminfo.node1.AnonHugePages
11760 ± 21% +37.6% 16177 ± 12% numa-meminfo.node1.AnonPages
1.01 ± 4% +26.5% 1.28 ± 10% perf-profile.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead
0.90 ± 56% -72.4% 0.25 ± 95% perf-profile.cycles-pp.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault
0.90 ± 56% -72.5% 0.25 ± 95% perf-profile.cycles-pp.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault
2.18 ± 2% +15.9% 2.52 ± 7% perf-profile.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_zone_memcg.shrink_zone
1.04 ± 4% +16.8% 1.21 ± 7% perf-profile.cycles-pp.__rmqueue.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc
1.08 ± 6% +18.1% 1.28 ± 5% perf-profile.cycles-pp._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_zone_memcg
0.90 ± 56% -72.4% 0.25 ± 95% perf-profile.cycles-pp.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault
0.77 ± 20% +47.0% 1.14 ± 9% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list
1.38 ± 2% +15.8% 1.60 ± 7% perf-profile.cycles-pp.rmap_walk.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_zone_memcg
2.17 ± 2% -14.7% 1.85 ± 9% perf-profile.cycles-pp.shrink_inactive_list.shrink_zone_memcg.shrink_zone.kswapd.kthread
2.19 ± 2% -14.5% 1.87 ± 9% perf-profile.cycles-pp.shrink_zone.kswapd.kthread.ret_from_fork
2.18 ± 1% -14.8% 1.86 ± 9% perf-profile.cycles-pp.shrink_zone_memcg.shrink_zone.kswapd.kthread.ret_from_fork
1.46 ± 2% +19.5% 1.75 ± 10% perf-profile.cycles-pp.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_zone_memcg.shrink_zone
50.20 ±133% +273.0% 187.25 ±109% sched_debug.cfs_rq:/.load.5
56.60 ± 11% +97.4% 111.75 ± 55% sched_debug.cfs_rq:/.load_avg.0
17.60 ± 9% +25.0% 22.00 ± 14% sched_debug.cfs_rq:/.load_avg.19
47.00 ± 33% -54.8% 21.25 ± 13% sched_debug.cfs_rq:/.load_avg.23
21.60 ± 30% +139.6% 51.75 ± 63% sched_debug.cfs_rq:/.load_avg.30
20.80 ± 30% +50.2% 31.25 ± 26% sched_debug.cfs_rq:/.load_avg.36
16.40 ± 2% +18.9% 19.50 ± 11% sched_debug.cfs_rq:/.runnable_load_avg.14
16685 ±278% -374.2% -45753 ±-25% sched_debug.cfs_rq:/.spread0.12
119599 ± 47% -71.8% 33762 ±121% sched_debug.cfs_rq:/.spread0.27
106418 ± 42% -267.6% -178399 ±-221% sched_debug.cfs_rq:/.spread0.28
56797 ± 81% -136.5% -20757 ±-159% sched_debug.cfs_rq:/.spread0.3
63128 ± 63% -157.9% -36549 ±-295% sched_debug.cfs_rq:/.spread0.46
57759 ±100% -676.1% -332737 ±-164% sched_debug.cfs_rq:/.spread0.8
151767 ± 32% -40.2% 90684 ± 14% sched_debug.cfs_rq:/.spread0.max
855.00 ± 1% +7.5% 918.75 ± 7% sched_debug.cfs_rq:/.util_avg.20
16.40 ± 2% +18.9% 19.50 ± 11% sched_debug.cpu.cpu_load[0].14
16.40 ± 2% +18.9% 19.50 ± 11% sched_debug.cpu.cpu_load[1].14
16.40 ± 2% +18.9% 19.50 ± 11% sched_debug.cpu.cpu_load[2].14
16.40 ± 2% +18.9% 19.50 ± 11% sched_debug.cpu.cpu_load[3].14
16.40 ± 2% +18.9% 19.50 ± 11% sched_debug.cpu.cpu_load[4].14
1305 ± 1% +16.6% 1521 ± 13% sched_debug.cpu.curr->pid.23
50.20 ±133% +273.0% 187.25 ±109% sched_debug.cpu.load.5
0.00 ± 0% +Inf% 1.00 ± 0% sched_debug.cpu.nr_running.5
10459 ± 30% +86.1% 19462 ± 13% sched_debug.cpu.nr_switches.12
11042 ±113% +169.5% 29759 ± 31% sched_debug.cpu.nr_switches.14
17516 ± 69% +85.9% 32567 ± 46% sched_debug.cpu.nr_switches.18
8287 ± 69% +271.4% 30775 ± 71% sched_debug.cpu.nr_switches.21
16525 ± 59% +80.6% 29850 ± 41% sched_debug.cpu.nr_switches.23
8306 ± 36% +92.8% 16016 ± 8% sched_debug.cpu.nr_switches.24
9271 ± 64% +149.7% 23153 ± 30% sched_debug.cpu.nr_switches.25
6094 ± 49% +243.8% 20951 ± 31% sched_debug.cpu.nr_switches.26
6569 ± 32% +194.9% 19373 ± 17% sched_debug.cpu.nr_switches.30
10034 ±111% +295.8% 39717 ± 41% sched_debug.cpu.nr_switches.31
7417 ± 70% +205.4% 22657 ± 35% sched_debug.cpu.nr_switches.38
9204 ± 69% +125.4% 20746 ± 26% sched_debug.cpu.nr_switches.42
8742 ± 68% +172.8% 23845 ± 14% sched_debug.cpu.nr_switches.47
11267 ± 75% +311.6% 46375 ± 68% sched_debug.cpu.nr_switches.7
14373 ± 4% +74.8% 25131 ± 3% sched_debug.cpu.nr_switches.avg
68237 ± 15% +30.3% 88895 ± 10% sched_debug.cpu.nr_switches.max
1792 ± 11% +610.9% 12741 ± 4% sched_debug.cpu.nr_switches.min
-1.00 ±-244% -200.0% 1.00 ±212% sched_debug.cpu.nr_uninterruptible.21
-29.60 ±-22% -31.7% -20.21 ±-29% sched_debug.cpu.nr_uninterruptible.min
11570 ± 27% +75.8% 20344 ± 14% sched_debug.cpu.sched_count.12
18187 ± 52% +67.7% 30504 ± 40% sched_debug.cpu.sched_count.23
8874 ± 33% +83.7% 16301 ± 7% sched_debug.cpu.sched_count.24
9847 ± 60% +137.9% 23423 ± 29% sched_debug.cpu.sched_count.25
7299 ± 38% +190.8% 21225 ± 30% sched_debug.cpu.sched_count.26
7156 ± 30% +179.8% 20022 ± 16% sched_debug.cpu.sched_count.30
10606 ±105% +279.7% 40278 ± 41% sched_debug.cpu.sched_count.31
7977 ± 66% +187.5% 22938 ± 35% sched_debug.cpu.sched_count.38
10713 ± 47% +138.5% 25554 ± 54% sched_debug.cpu.sched_count.39
25911 ± 63% +76.9% 45826 ± 39% sched_debug.cpu.sched_count.4
9758 ± 66% +115.6% 21039 ± 26% sched_debug.cpu.sched_count.42
9252 ± 64% +160.6% 24110 ± 14% sched_debug.cpu.sched_count.47
12153 ± 69% +296.0% 48126 ± 65% sched_debug.cpu.sched_count.7
110528 ± 0% +9.1% 120637 ± 0% sched_debug.cpu.sched_count.avg
2364 ± 8% +455.3% 13129 ± 5% sched_debug.cpu.sched_count.min
816.60 ± 32% -40.5% 485.75 ± 16% sched_debug.cpu.sched_goidle.17
383.20 ± 22% +56.1% 598.00 ± 22% sched_debug.cpu.sched_goidle.23
204.20 ± 12% +210.7% 634.50 ± 86% sched_debug.cpu.sched_goidle.25
190.60 ± 26% +54.6% 294.75 ± 12% sched_debug.cpu.sched_goidle.31
4752 ± 36% +107.1% 9844 ± 12% sched_debug.cpu.ttwu_count.12
5428 ±115% +178.2% 15102 ± 31% sched_debug.cpu.ttwu_count.14
8902 ± 73% +80.8% 16095 ± 47% sched_debug.cpu.ttwu_count.18
3698 ± 62% +315.1% 15351 ± 75% sched_debug.cpu.ttwu_count.21
8371 ± 57% +80.6% 15121 ± 41% sched_debug.cpu.ttwu_count.23
4569 ± 33% +85.9% 8496 ± 10% sched_debug.cpu.ttwu_count.24
4884 ± 56% +152.6% 12339 ± 27% sched_debug.cpu.ttwu_count.25
3187 ± 39% +258.7% 11434 ± 23% sched_debug.cpu.ttwu_count.26
3513 ± 31% +183.5% 9962 ± 19% sched_debug.cpu.ttwu_count.30
5462 ± 97% +283.9% 20970 ± 41% sched_debug.cpu.ttwu_count.31
5605 ± 43% +85.8% 10415 ± 34% sched_debug.cpu.ttwu_count.34
4011 ± 66% +190.7% 11661 ± 33% sched_debug.cpu.ttwu_count.38
4305 ± 41% +183.7% 12215 ± 59% sched_debug.cpu.ttwu_count.39
4828 ± 73% +125.9% 10906 ± 25% sched_debug.cpu.ttwu_count.42
4560 ± 74% +168.1% 12227 ± 12% sched_debug.cpu.ttwu_count.47
5663 ± 78% +313.7% 23431 ± 67% sched_debug.cpu.ttwu_count.7
7326 ± 4% +74.8% 12808 ± 3% sched_debug.cpu.ttwu_count.avg
34268 ± 15% +29.6% 44398 ± 9% sched_debug.cpu.ttwu_count.max
792.03 ± 20% +698.7% 6325 ± 5% sched_debug.cpu.ttwu_count.min
3951 ± 45% +119.7% 8681 ± 13% sched_debug.cpu.ttwu_local.12
4585 ±137% +205.0% 13987 ± 33% sched_debug.cpu.ttwu_local.14
8045 ± 80% +88.6% 15173 ± 48% sched_debug.cpu.ttwu_local.18
2941 ± 79% +386.9% 14319 ± 79% sched_debug.cpu.ttwu_local.21
7262 ± 67% +95.1% 14172 ± 44% sched_debug.cpu.ttwu_local.23
3991 ± 40% +100.4% 8000 ± 10% sched_debug.cpu.ttwu_local.24
4237 ± 68% +162.6% 11128 ± 29% sched_debug.cpu.ttwu_local.25
2570 ± 41% +292.2% 10081 ± 30% sched_debug.cpu.ttwu_local.26
2891 ± 40% +215.8% 9132 ± 21% sched_debug.cpu.ttwu_local.30
4665 ±113% +318.2% 19509 ± 43% sched_debug.cpu.ttwu_local.31
4533 ± 50% +115.0% 9748 ± 35% sched_debug.cpu.ttwu_local.34
4131 ± 56% +145.5% 10144 ± 47% sched_debug.cpu.ttwu_local.37
3349 ± 82% +225.3% 10897 ± 34% sched_debug.cpu.ttwu_local.38
3630 ± 48% +219.9% 11611 ± 63% sched_debug.cpu.ttwu_local.39
4138 ± 82% +140.4% 9947 ± 29% sched_debug.cpu.ttwu_local.42
3356 ± 78% +238.4% 11359 ± 14% sched_debug.cpu.ttwu_local.47
4866 ± 91% +360.6% 22415 ± 71% sched_debug.cpu.ttwu_local.7
6406 ± 5% +84.0% 11787 ± 4% sched_debug.cpu.ttwu_local.avg
33023 ± 15% +29.7% 42825 ± 11% sched_debug.cpu.ttwu_local.max
450.27 ± 20% +1199.8% 5852 ± 4% sched_debug.cpu.ttwu_local.min
1.58 ± 4% -55.1% 0.71 ± 99% sched_debug.rt_rq:/.rt_time.12
ivb43: Ivytown Ivy Bridge-EP
Memory: 64G
vm-scalability.time.involuntary_context_switches

    [ASCII plot omitted: bisect-good samples (*) stay near 200000 involuntary
     context switches, while bisect-bad samples (O) range from roughly 500000
     to 750000, most of them between 650000 and 750000.]
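
To repeat the bisection by hand between the two kernels compared above, the
standard git bisect flow applies (a sketch; it assumes the 0day-ci branch named
at the top of this report has been fetched into the local tree):

        git bisect start
        git bisect bad 78d1788350477246d516496d8d7684fa80ef7f18
        git bisect good v4.5-rc1
        # for each revision git bisect checks out: build, boot, run the
        # vm-scalability lru-file-mmap-read job, then mark the revision
        # "git bisect good" or "git bisect bad" based on
        # time.involuntary_context_switches
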
To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
Attachments: job.yaml (text/plain, 3463 bytes), reproduce.sh (application/x-sh, 17034 bytes)