Message-ID: <20181211091429.GO23332@shao2-debian>
Date: Tue, 11 Dec 2018 17:14:29 +0800
From: kernel test robot <rong.a.chen@...el.com>
To: NeilBrown <neilb@...e.com>
Cc: Jeff Layton <jlayton@...nel.org>, Martin Wilck <mwilck@...e.de>,
"J. Bruce Fields" <bfields@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Jeff Layton <jlayton@...hat.com>, lkp@...org
Subject: [LKP] [fs/locks] fd7732e033: stress-ng.eventfd.ops 21.9% improvement
Greetings,
FYI, we noticed a 21.9% improvement of stress-ng.eventfd.ops due to commit:
commit: fd7732e033e30b3a586923b57e338c859e17858a ("fs/locks: create a tree of dependent requests.")
https://git.kernel.org/cgit/linux/kernel/git/jlayton/linux.git locks-next
in testcase: stress-ng
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
with the following parameters:
nr_threads: 100%
disk: 1HDD
testtime: 1s
class: filesystem
ucode: 0xb00002e
cpufreq_governor: performance
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
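For context, the fcntl stressor (the biggest mover in the results below, at
+241.2%) exercises POSIX record locks, which is the code path reworked by
fd7732e033. A minimal sketch of that pattern follows; the file path is
arbitrary and this is an illustration, not code taken from stress-ng:

    /* Illustrative only: take and release a byte-range lock.
     * Blocked F_SETLKW requests queue inside fs/locks.c, the
     * area changed by fd7732e033. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/lock-demo", O_RDWR | O_CREAT, 0600);
        struct flock fl;

        if (fd < 0) {
            perror("open");
            return 1;
        }

        memset(&fl, 0, sizeof(fl));
        fl.l_type   = F_WRLCK;   /* exclusive lock...        */
        fl.l_whence = SEEK_SET;
        fl.l_start  = 0;
        fl.l_len    = 1;         /* ...on a single byte      */

        /* F_SETLKW sleeps until the range is free. */
        if (fcntl(fd, F_SETLKW, &fl) < 0) {
            perror("fcntl(F_SETLKW)");
            return 1;
        }

        fl.l_type = F_UNLCK;     /* release; the kernel wakes waiters */
        fcntl(fd, F_SETLK, &fl);
        close(fd);
        return 0;
    }

With many such processes contending on the same byte, throughput is
dominated by how efficiently the kernel queues and wakes blocked requests.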
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
filesystem/gcc-7/performance/1HDD/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep3/stress-ng/1s/0xb00002e
commit:
c0e1590897 ("fs/locks: change all *_conflict() functions to return bool.")
fd7732e033 ("fs/locks: create a tree of dependent requests.")
c0e15908979d269a fd7732e033e30b3a586923b57e
---------------- --------------------------
%stddev          %change          %stddev
1184559 ± 14% +21.9% 1443641 ± 12% stress-ng.eventfd.ops
1184698 ± 14% +21.9% 1443665 ± 12% stress-ng.eventfd.ops_per_sec
32423 ± 3% +241.2% 110636 stress-ng.fcntl.ops
32422 ± 3% +241.2% 110628 stress-ng.fcntl.ops_per_sec
1.906e+08 -6.5% 1.781e+08 perf-stat.node-loads
1053 ±142% -95.4% 48.25 meminfo.Active(file)
293886 ± 10% -23.5% 224694 ± 13% meminfo.DirectMap4k
13992028 ± 4% -8.7% 12777987 ± 4% numa-numastat.node0.local_node
13996312 ± 4% -8.7% 12782281 ± 4% numa-numastat.node0.numa_hit
602968 ± 10% -51.1% 294959 ± 31% turbostat.C1E
0.83 ± 4% -0.2 0.60 ± 14% turbostat.C1E%
31784409 ± 4% -27.5% 23043591 ± 14% cpuidle.C1E.time
606812 ± 10% -50.9% 298135 ± 30% cpuidle.C1E.usage
293255 ± 17% -31.2% 201788 ± 10% cpuidle.POLL.time
263.00 ±142% -95.4% 12.00 proc-vmstat.nr_active_file
263.00 ±142% -95.4% 12.00 proc-vmstat.nr_zone_active_file
27429189 ± 3% -8.9% 24977810 ± 6% proc-vmstat.numa_hit
27411960 ± 3% -8.9% 24960610 ± 6% proc-vmstat.numa_local
24864411 ± 4% -6.4% 23283878 ± 5% proc-vmstat.unevictable_pgs_culled
6.16 ± 84% -6.2 0.00 perf-profile.calltrace.cycles-pp.__d_lookup.lookup_fast.path_openat.do_filp_open.do_sys_open
6.16 ± 84% -6.2 0.00 perf-profile.calltrace.cycles-pp.lookup_fast.path_openat.do_filp_open.do_sys_open.do_syscall_64
6.16 ± 84% -6.2 0.00 perf-profile.children.cycles-pp.__d_lookup
6.16 ± 84% -6.2 0.00 perf-profile.children.cycles-pp.lookup_fast
6.57 ±100% -5.5 1.04 ±173% perf-profile.children.cycles-pp.vsnprintf
6.16 ± 84% -6.2 0.00 perf-profile.self.cycles-pp.__d_lookup
4899 ± 24% -27.5% 3550 ± 6% sched_debug.cfs_rq:/.min_vruntime.avg
13466 ± 18% -19.7% 10814 ± 9% sched_debug.cfs_rq:/.min_vruntime.max
604.50 ± 19% -17.3% 500.00 ± 2% sched_debug.cfs_rq:/.util_est_enqueued.max
11536 ±111% -79.6% 2357 ± 22% sched_debug.cpu.avg_idle.min
164.50 ± 4% +15.2% 189.50 ± 5% sched_debug.cpu.nr_switches.min
12.25 ± 23% +67.3% 20.50 ± 14% sched_debug.cpu.nr_uninterruptible.max
3.16 ± 3% +13.7% 3.59 ± 5% sched_debug.cpu.nr_uninterruptible.stddev
226.75 ± 55% -74.4% 58.00 slabinfo.btrfs_extent_buffer.active_objs
226.75 ± 55% -74.4% 58.00 slabinfo.btrfs_extent_buffer.num_objs
213.00 ± 26% +41.9% 302.25 ± 16% slabinfo.nfs_commit_data.active_objs
213.00 ± 26% +41.9% 302.25 ± 16% slabinfo.nfs_commit_data.num_objs
279.75 ± 34% +57.7% 441.25 ± 21% slabinfo.nfs_read_data.active_objs
279.75 ± 34% +57.7% 441.25 ± 21% slabinfo.nfs_read_data.num_objs
654.25 ± 17% +22.2% 799.25 ± 9% slabinfo.skbuff_fclone_cache.active_objs
654.25 ± 17% +22.2% 799.25 ± 9% slabinfo.skbuff_fclone_cache.num_objs
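The fcntl numbers line up with what the patch does: previously, releasing a
lock woke every blocked waiter, most of which immediately re-blocked (a
thundering herd); with fd7732e033, a waiter that conflicts with an
already-blocked request is parked beneath it, forming a tree, so an unlock
wakes only the direct children. Below is a rough userspace model of that
idea; the structures and names are invented for illustration and do not
match fs/locks.c:

    /* Userspace sketch of the "tree of dependent requests" idea
     * from fd7732e033; NOT the kernel's actual implementation. */
    #include <stdio.h>

    #define MAX_CHILDREN 8

    struct request {
        const char *name;
        struct request *children[MAX_CHILDREN]; /* blocked on this one */
        int nchildren;
    };

    /* A waiter that conflicts with an already-blocked request is
     * parked beneath it instead of on the lock holder directly. */
    static void park(struct request *blocker, struct request *waiter)
    {
        if (blocker->nchildren < MAX_CHILDREN)
            blocker->children[blocker->nchildren++] = waiter;
    }

    /* On unlock, wake only the first level of the tree; each woken
     * request retries the lock, carrying its own subtree with it. */
    static void wake_children(struct request *owner)
    {
        int i;

        for (i = 0; i < owner->nchildren; i++)
            printf("wake %s (%d dependents stay parked beneath it)\n",
                   owner->children[i]->name,
                   owner->children[i]->nchildren);
        owner->nchildren = 0;
    }

    int main(void)
    {
        struct request holder = { .name = "holder" };
        struct request a = { .name = "A" };
        struct request b = { .name = "B" };
        struct request c = { .name = "C" };

        park(&holder, &a);  /* A blocks on the current lock holder */
        park(&a, &b);       /* B conflicts with A's pending range  */
        park(&a, &c);       /* C does too                          */

        /* One wakeup instead of three: B and C sleep on under A. */
        wake_children(&holder);
        return 0;
    }

With the old flat list, all three waiters would have been woken and two
would immediately re-block; the tree keeps the wakeup cost closer to the
number of requests that can actually make progress.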
stress-ng.fcntl.ops
120000 +-+----------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O |
110000 +-O O O O O O O |
100000 +-+ |
| |
90000 +-+ |
80000 +-+ |
| |
70000 +-+ |
60000 +-+ |
| |
50000 +-+ |
40000 +-+ |
|. .+.+..+.+. .+.+..+.+. .+.+.+..+. .+. .+.. .+. .+. .+.|
30000 +-+----------------------------------------------------------------+
stress-ng.fcntl.ops_per_sec
120000 +-+----------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O |
110000 +-O O O O O O O |
100000 +-+ |
| |
90000 +-+ |
80000 +-+ |
| |
70000 +-+ |
60000 +-+ |
| |
50000 +-+ |
40000 +-+ |
|. .+.+..+.+. .+.+..+.+. .+.+.+..+. .+. .+.. .+. .+. .+.|
30000 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
View attachment "config-4.20.0-rc2-00010-gfd7732e" of type "text/plain" (168537 bytes)
View attachment "job-script" of type "text/plain" (7366 bytes)
View attachment "job.yaml" of type "text/plain" (4945 bytes)
View attachment "reproduce" of type "text/plain" (254 bytes)