Message-ID: <202601211419.2b6838bf-lkp@intel.com>
Date: Wed, 21 Jan 2026 15:05:41 +0800
From: kernel test robot <oliver.sang@...el.com>
To: Mateusz Guzik <mjguzik@...il.com>
CC: <oe-lkp@...ts.linux.dev>, <lkp@...el.com>, <linux-kernel@...r.kernel.org>,
Christian Brauner <brauner@...nel.org>, <linux-fsdevel@...r.kernel.org>,
<oliver.sang@...el.com>
Subject: [linus:master] [fs] 177fdbae39:
fxmark.ssd_ext4_no_jnl_MRPL_4_bufferedio.works/sec 6.8% improvement
Hello,
kernel test robot noticed a 6.8% improvement of fxmark.ssd_ext4_no_jnl_MRPL_4_bufferedio.works/sec on:
commit: 177fdbae39ecccb441d45e5e5ab146ea35b03d49 ("fs: inline step_into() and walk_component()")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
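For readers not following the fs/namei.c series: the commit title refers to inlining two small static helpers on the path-walk hot path so their call/return overhead disappears from the lookup loop. Below is a minimal, hypothetical userspace sketch of that general technique only; the step()/walk() names are made up for illustration and are not the kernel code:

    /*
     * Illustrative sketch only -- not the actual fs/namei.c change.
     * A small static helper marked inline is folded into its caller,
     * removing call/return overhead from a hot loop.
     */
    #include <stdio.h>

    /* hypothetical stand-in for one path-walk step */
    static inline int step(int state, int component)
    {
        return state * 31 + component;      /* placeholder work */
    }

    /* hypothetical stand-in for the component-walking loop */
    static int walk(const int *components, int n)
    {
        int state = 0;

        for (int i = 0; i < n; i++)
            state = step(state, components[i]);  /* folded away when inlined */

        return state;
    }

    int main(void)
    {
        int path[] = { 1, 2, 3, 4 };

        printf("%d\n", walk(path, sizeof(path) / sizeof(path[0])));
        return 0;
    }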
testcase: fxmark
config: x86_64-rhel-9.4
compiler: gcc-14
test machine: 64 threads 2 sockets Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz (Ice Lake) with 256G memory
parameters:
disk: 1SSD
media: ssd
test: MRPL
fstype: ext4_no_jnl
directio: bufferedio
thread_nr: 4
cpufreq_governor: performance
Details are below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20260121/202601211419.2b6838bf-lkp@intel.com
=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/thread_nr:
gcc-14/performance/bufferedio/1SSD/ext4_no_jnl/x86_64-rhel-9.4/ssd/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp8/MRPL/fxmark/4
commit:
9d2a6211a7 ("fs: tidy up step_into() & friends before inlining")
177fdbae39 ("fs: inline step_into() and walk_component()")
9d2a6211a7b97256 177fdbae39ecccb441d45e5e5ab
---------------- ---------------------------
%stddev %change %stddev
\ | \
7403946 +6.8% 7907512 fxmark.ssd_ext4_no_jnl_MRPL_4_bufferedio.works/sec
8028 ± 3% -13.1% 6980 ± 5% numa-meminfo.node0.KernelStack
5780 ± 5% +18.5% 6851 ± 5% numa-meminfo.node1.KernelStack
8029 ± 3% -13.0% 6982 ± 6% numa-vmstat.node0.nr_kernel_stack
5781 ± 5% +18.5% 6852 ± 5% numa-vmstat.node1.nr_kernel_stack
3107 ± 13% -14.2% 2665 perf-sched.total_wait_and_delay.max.ms
3107 ± 13% -14.2% 2665 perf-sched.total_wait_time.max.ms
1610350 +2.0% 1642966 perf-stat.i.cache-references
1587301 +2.0% 1619507 perf-stat.ps.cache-references
6.94 ± 83% -6.1 0.88 ±223% perf-profile.calltrace.cycles-pp.iput.__dentry_kill.dput.__fput.task_work_run
8.67 ± 84% -4.8 3.89 ±143% perf-profile.calltrace.cycles-pp.mutex_unlock.sw_perf_event_destroy.__free_event.perf_event_release_kernel.perf_release
7.84 ± 83% -3.3 4.52 ±163% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.__mmput.exit_mm.do_exit
5.17 ±117% -2.3 2.86 ±144% perf-profile.calltrace.cycles-pp.__tlb_batch_free_encoded_pages.tlb_finish_mmu.exit_mmap.__mmput.exit_mm
5.17 ±117% -2.3 2.86 ±144% perf-profile.calltrace.cycles-pp.folios_put_refs.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_finish_mmu.exit_mmap
5.17 ±117% -2.3 2.86 ±144% perf-profile.calltrace.cycles-pp.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_finish_mmu.exit_mmap.__mmput
6.94 ± 83% -6.1 0.88 ±223% perf-profile.children.cycles-pp.iput
10.34 ± 99% -5.3 5.00 ±152% perf-profile.children.cycles-pp.mutex_unlock
7.84 ± 83% -3.3 4.52 ±163% perf-profile.children.cycles-pp.tlb_finish_mmu
5.17 ±117% -2.3 2.86 ±144% perf-profile.children.cycles-pp.__tlb_batch_free_encoded_pages
5.17 ±117% -2.3 2.86 ±144% perf-profile.children.cycles-pp.free_pages_and_swap_cache
6.94 ± 83% -6.1 0.88 ±223% perf-profile.self.cycles-pp.iput
10.34 ± 99% -5.3 5.00 ±152% perf-profile.self.cycles-pp.mutex_unlock
3.78 ±100% -1.4 2.38 ±223% perf-profile.self.cycles-pp.zap_present_ptes
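For reference, the %change column is the relative difference between the two commits, (new - old) / old; for the headline works/sec row above this works out to:

    (7907512 - 7403946) / 7403946 = 503566 / 7403946 ≈ 0.068  ->  +6.8%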
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki