Message-ID: <eb352106-5a3c-63fe-2409-494cf0a31bfc@intel.com>
Date: Tue, 30 Aug 2022 11:17:35 +0800
From: kernel test robot <yujie.liu@...el.com>
To: Filipe Manana <fdmanana@...e.com>
CC: <lkp@...ts.01.org>, <lkp@...el.com>,
David Sterba <dsterba@...e.com>,
Anand Jain <anand.jain@...cle.com>,
Nikolay Borisov <nborisov@...e.com>,
<linux-kernel@...r.kernel.org>, <linux-btrfs@...r.kernel.org>,
<ying.huang@...el.com>, <feng.tang@...el.com>,
<zhengjun.xing@...ux.intel.com>, <fengwei.yin@...el.com>,
<regressions@...ts.linux.dev>
Subject: [btrfs] ca6dee6b79: fxmark.ssd_btrfs_MWRM_72_bufferedio.works/sec -8.4% regression
Greetings,
FYI, we noticed a -8.4% regression of fxmark.ssd_btrfs_MWRM_72_bufferedio.works/sec due to commit:
commit: ca6dee6b7946794fa340a7290ca399a50b61705f ("btrfs: balance btree dirty pages and delayed items after a rename")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: fxmark
on test machine: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory
with following parameters:
disk: 1SSD
media: ssd
test: MWRM
fstype: btrfs
directio: bufferedio
cpufreq_governor: performance
ucode: 0xd000363
test-description: FxMark is a filesystem benchmark that tests multicore scalability.
test-url: https://github.com/sslab-gatech/fxmark
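For context: MWRM is fxmark's shared-directory rename microbenchmark
(our reading of the fxmark workload naming, not stated in this report),
so a slowdown from a change in the btrfs rename path is plausible. A
hedged sketch of the shape of the bisected change, reconstructed from
its subject line rather than copied from the actual diff:

    /* fs/btrfs/inode.c (sketch; names per ~v5.19, details assumed) */
    static int btrfs_rename(struct user_namespace *mnt_userns,
                            struct inode *old_dir, struct dentry *old_dentry,
                            struct inode *new_dir, struct dentry *new_dentry,
                            unsigned int flags)
    {
            struct btrfs_fs_info *fs_info = btrfs_sb(old_dir->i_sb);
            int ret;

            /* ... existing transaction work for the rename ... */

            /*
             * New with ca6dee6b79: balance dirty btree pages and
             * delayed items after the rename, as unlink/mkdir/etc.
             * already do.  This call can block the caller to throttle
             * metadata writeback, which matches the extra idle time
             * and involuntary context switches in the numbers below.
             */
            btrfs_btree_balance_dirty(fs_info);
            return ret;
    }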
=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/ucode:
gcc-11/performance/bufferedio/1SSD/btrfs/x86_64-rhel-8.3/ssd/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp5/MWRM/fxmark/0xd000363
commit:
b8bea09a45 ("btrfs: add trace event for submitted RAID56 bio")
ca6dee6b79 ("btrfs: balance btree dirty pages and delayed items after a rename")
b8bea09a456fc31a (parent)        ca6dee6b7946794fa340a7290ca (this commit)
----------------                 ---------------------------
(columns: parent value ±%stddev, %change, new value ±%stddev, metric)
1821853 -13.9% 1568247 ± 3% fxmark.ssd_btrfs_MWRM_36_bufferedio.works
36436 -13.9% 31362 ± 3% fxmark.ssd_btrfs_MWRM_36_bufferedio.works/sec
1675102 -14.0% 1439994 ± 7% fxmark.ssd_btrfs_MWRM_54_bufferedio.works
33497 -14.0% 28796 ± 7% fxmark.ssd_btrfs_MWRM_54_bufferedio.works/sec
1572332 -8.4% 1440600 ± 6% fxmark.ssd_btrfs_MWRM_72_bufferedio.works
31445 -8.4% 28809 ± 6% fxmark.ssd_btrfs_MWRM_72_bufferedio.works/sec
356010 +80.0% 640832 fxmark.time.involuntary_context_switches
68.50 -24.1% 52.00 fxmark.time.percent_of_cpu_this_job_got
630.47 -24.0% 479.23 fxmark.time.system_time
1.335e+10 +49.8% 2e+10 cpuidle..time
1045 ± 4% +11.8% 1168 uptime.idle
31.54 +50.2% 47.37 iostat.cpu.idle
64.16 -24.7% 48.29 iostat.cpu.system
31.17 +50.3% 46.83 vmstat.cpu.id
12.83 ± 5% -55.8% 5.67 ± 8% vmstat.procs.r
32.13 +15.8 47.95 mpstat.cpu.all.idle%
0.47 ± 7% +0.1 0.53 ± 3% mpstat.cpu.all.iowait%
63.37 -16.1 47.31 mpstat.cpu.all.sys%
10.04 ± 3% +13.5% 11.39 ± 3% perf-stat.i.metric.K/sec
869.81 ± 10% -16.2% 728.74 ± 15% perf-stat.i.node-loads
871.23 ± 10% -16.2% 730.49 ± 15% perf-stat.ps.node-loads
3004 ± 8% -52.1% 1440 ± 6% numa-meminfo.node0.Active(anon)
1218568 -10.8% 1086453 numa-meminfo.node0.Inactive
351812 ± 3% -29.0% 249640 ± 12% numa-meminfo.node0.Inactive(anon)
120150 -79.3% 24861 ± 3% numa-meminfo.node0.Shmem
3489 ± 8% -45.0% 1919 ± 2% meminfo.Active(anon)
492107 -19.0% 398809 meminfo.Committed_AS
382253 -24.6% 288151 meminfo.Inactive(anon)
124727 -76.8% 28886 ± 2% meminfo.Shmem
2050 ± 4% -10.5% 1834 ± 5% meminfo.Writeback
750.83 ± 8% -52.1% 360.00 ± 6% numa-vmstat.node0.nr_active_anon
87951 ± 3% -29.0% 62408 ± 12% numa-vmstat.node0.nr_inactive_anon
30038 -79.3% 6216 ± 3% numa-vmstat.node0.nr_shmem
750.83 ± 8% -52.1% 360.00 ± 6% numa-vmstat.node0.nr_zone_active_anon
87951 ± 3% -29.0% 62408 ± 12% numa-vmstat.node0.nr_zone_inactive_anon
7554028 ± 3% -71.2% 2174126 ± 5% sched_debug.cfs_rq:/.min_vruntime.avg
7640393 ± 3% -70.5% 2254050 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
7291209 ± 3% -73.6% 1926973 ± 5% sched_debug.cfs_rq:/.min_vruntime.min
873.62 ± 7% -19.2% 705.68 ± 10% sched_debug.cfs_rq:/.runnable_avg.avg
790.32 ± 7% -21.4% 621.34 ± 12% sched_debug.cfs_rq:/.runnable_avg.min
747.11 ± 3% -22.7% 577.37 ± 3% sched_debug.cfs_rq:/.util_avg.avg
670.92 ± 5% -25.2% 501.70 ± 2% sched_debug.cfs_rq:/.util_avg.min
409.44 ± 9% -35.1% 265.80 ± 21% sched_debug.cfs_rq:/.util_est_enqueued.avg
789.44 ± 3% -20.1% 630.53 ± 5% sched_debug.cfs_rq:/.util_est_enqueued.max
0.00 ± 13% -67.3% 0.00 ± 22% sched_debug.cpu.next_balance.stddev
872.67 ± 8% -45.0% 479.83 ± 2% proc-vmstat.nr_active_anon
1801345 -1.7% 1771330 proc-vmstat.nr_file_pages
95550 -24.6% 72037 proc-vmstat.nr_inactive_anon
8752 -3.7% 8426 proc-vmstat.nr_mapped
31169 -76.8% 7221 ± 2% proc-vmstat.nr_shmem
872.67 ± 8% -45.0% 479.83 ± 2% proc-vmstat.nr_zone_active_anon
95550 -24.6% 72037 proc-vmstat.nr_zone_inactive_anon
9553 ± 10% -16.8% 7950 ± 3% proc-vmstat.numa_hint_faults
18886391 -3.6% 18207624 proc-vmstat.numa_hit
18770999 -3.6% 18091363 proc-vmstat.numa_local
7398756 -4.0% 7105675 proc-vmstat.pgactivate
18885154 -3.6% 18206666 proc-vmstat.pgalloc_normal
7248262 -4.3% 6933915 ± 2% proc-vmstat.pgdeactivate
18894473 -3.4% 18243898 proc-vmstat.pgfree
7829962 -3.0% 7596447 ± 2% proc-vmstat.pgrotated
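The profile above (about +50% idle time, -24% system time, +80%
involuntary context switches) is consistent with rename callers now
sleeping to throttle dirty metadata. A hedged sketch of what
btrfs_btree_balance_dirty() does, from memory of fs/btrfs/disk-io.c
around v5.19 and not verified against this exact tree:

    void btrfs_btree_balance_dirty(struct btrfs_fs_info *fs_info)
    {
            /* flush accumulated delayed items (dir index entries,
             * inode updates) when too many have piled up */
            btrfs_balance_delayed_items(fs_info);

            /* then, if dirty btree metadata exceeds a threshold,
             * sleep in balance_dirty_pages_ratelimited() on the
             * btree inode's mapping until writeback catches up */
            if (percpu_counter_compare(&fs_info->dirty_metadata_bytes,
                                       BTRFS_DIRTY_METADATA_THRESH) > 0)
                    balance_dirty_pages_ratelimited(
                                    fs_info->btree_inode->i_mapping);
    }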
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <yujie.liu@...el.com>
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
#regzbot introduced: ca6dee6b79
--
0-DAY CI Kernel Test Service
https://01.org/lkp
View attachment "config-5.19.0-rc8-00020-gca6dee6b7946" of type "text/plain" (170603 bytes)
View attachment "job-script" of type "text/plain" (8418 bytes)
View attachment "job.yaml" of type "text/plain" (5819 bytes)
View attachment "reproduce" of type "text/plain" (265 bytes)