Message-ID: <CAL3q7H6Vk_Szvy2W66ZLikqkVEdvyLPuqpvcBdYcGKD5-gEzvA@mail.gmail.com>
Date: Thu, 12 Jun 2025 14:17:03 +0100
From: Filipe Manana <fdmanana@...nel.org>
To: kernel test robot <oliver.sang@...el.com>
Cc: Filipe Manana <fdmanana@...e.com>, oe-lkp@...ts.linux.dev, lkp@...el.com,
linux-kernel@...r.kernel.org, David Sterba <dsterba@...e.com>,
linux-btrfs@...r.kernel.org
Subject: Re: [linus:master] [btrfs] 32c523c578: reaim.jobs_per_min 16.1% regression
On Thu, Jun 12, 2025 at 7:26 AM kernel test robot <oliver.sang@...el.com> wrote:
>
>
>
> Hello,
>
> kernel test robot noticed a 16.1% regression of reaim.jobs_per_min on:
>
>
> commit: 32c523c578e8489f55663ce8a8860079c8deb414 ("btrfs: allow folios to be released while ordered extent is finishing")
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>
> [still regression on linus/master 5abc7438f1e9d62e91ad775cc83c9594c48d2282]
> [still regression on linux-next/master 911483b25612c8bc32a706ba940738cc43299496]
>
> testcase: reaim
> config: x86_64-rhel-9.4
> compiler: gcc-12
> test machine: 64 threads 2 sockets Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz (Ice Lake) with 256G memory
> parameters:
>
> runtime: 300s
> nr_task: 100%
> disk: 1HDD
> fs: btrfs
> test: disk
> cpufreq_governor: performance
>
>
> In addition to that, the commit also has significant impact on the following tests:
>
> +------------------+-------------------------------------------------------------------------------------------+
> | testcase: change | reaim: reaim.jobs_per_min 5.3% regression |
Here it says a 5.3% regression instead of 16.1%.
The phrasing right above suggests it's a different test, but it's
confusing because nothing in the table description (the test
parameters) differs from the test details above.
IIUC, it's the same test but run on a different machine.
> | test machine | 8 threads 1 sockets Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz (Ivy Bridge) with 16G memory |
> | test parameters | cpufreq_governor=performance |
> | | disk=1HDD |
> | | fs=btrfs |
> | | nr_task=100% |
> | | runtime=300s |
> | | test=disk |
> +------------------+-------------------------------------------------------------------------------------------+
>
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <oliver.sang@...el.com>
> | Closes: https://lore.kernel.org/oe-lkp/202506121329.9d1f4d50-lkp@intel.com
>
>
> Details are as below:
> -------------------------------------------------------------------------------------------------->
>
>
> The kernel config and materials to reproduce are available at:
> https://download.01.org/0day-ci/archive/20250612/202506121329.9d1f4d50-lkp@intel.com
Following the instructions in the 'reproduce' file, installing the
job fails:
fdmanana 13:28:49 ~/git/hub/lkp-tests (master)> sudo ./bin/lkp install job.yaml
distro=debian
version=13
Use: /home/fdmanana/git/hub/lkp-tests/distro/installer/debian install
bc debianutils gawk gzip kmod numactl rsync time automake bison
build-essential bzip2 ca-certificates cpio fakeroot flex git
libarchive-tools libc6-dev:i386 libc6-dev-x32 libipc-run-perl
libklibc-dev libssl-dev libtool linux-cpupower linux-libc-dev:i386
linux-perf openssl patch rsync ruby ruby-dev wget
Hit:1 http://deb.debian.org/debian testing InRelease
Reading package lists... Done
Reading package lists...
Building dependency tree...
Reading state information...
(...)
Reading state information...
E: Unable to locate package libpython2
E: Unable to locate package libpython3
Cannot install some packages of perf-c2c depends
fdmanana 13:29:05 ~/git/hub/lkp-tests (master)>
Alternatively, I tried to run reaim directly, following the script from
https://download.01.org/0day-ci/archive/20250612/202506121329.9d1f4d50-lkp@intel.com/repro-script
The problem is that I can't find Debian packages for reaim, and going
to SourceForge for the source, the latest release is
osdl-aim-7.0.1.13.tar.gz (from 2004!), which doesn't compile: there
are tons of warnings and errors.
Applying the patch from lkp at
lkp-tests/programs/reaim/pkg/reaim.patch doesn't help either.
>
> =========================================================================================
> compiler/cpufreq_governor/disk/fs/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase:
> gcc-12/performance/1HDD/btrfs/x86_64-rhel-9.4/100%/debian-12-x86_64-20240206.cgz/300s/lkp-icl-2sp7/disk/reaim
>
> commit:
> cbfb4cbf45 ("btrfs: update comment for try_release_extent_state()")
> 32c523c578 ("btrfs: allow folios to be released while ordered extent is finishing")
>
> cbfb4cbf459d9be4 32c523c578e8489f55663ce8a88
> ---------------- ---------------------------
> %stddev %change %stddev
> \ | \
> 71009 -14.7% 60593 meminfo.Shmem
> 2.344e+10 -10.8% 2.09e+10 cpuidle..time
> 6491212 -20.8% 5141230 cpuidle..usage
> 93.14 -1.5% 91.75 iostat.cpu.idle
> 5.81 +24.8% 7.24 iostat.cpu.iowait
> 419.83 -9.5% 380.03 uptime.boot
> 24988 -10.7% 22325 uptime.idle
> 584.67 ± 3% -23.6% 446.83 ± 5% perf-c2c.DRAM.remote
> 551.00 ± 10% -19.1% 445.50 ± 9% perf-c2c.HITM.local
> 407.83 ± 5% -25.4% 304.17 ± 6% perf-c2c.HITM.remote
> 5.84 +1.5 7.29 mpstat.cpu.all.iowait%
> 0.09 -0.0 0.07 mpstat.cpu.all.irq%
> 0.53 -0.1 0.46 mpstat.cpu.all.sys%
> 0.36 ± 2% +0.0 0.40 ± 4% mpstat.cpu.all.usr%
> 2110659 ± 3% -22.4% 1638521 ± 3% numa-numastat.node0.local_node
> 2132066 ± 3% -21.5% 1673684 ± 3% numa-numastat.node0.numa_hit
> 2054843 ± 4% -16.8% 1709588 ± 2% numa-numastat.node1.local_node
> 2099801 ± 3% -17.1% 1740752 ± 2% numa-numastat.node1.numa_hit
> 61.66 +186.4% 176.60 vmstat.io.bi
> 17553 -16.2% 14712 vmstat.io.bo
> 3.91 ± 3% +25.7% 4.92 ± 4% vmstat.procs.b
> 9668 -12.3% 8476 vmstat.system.cs
> 13868 -10.9% 12357 vmstat.system.in
> 521965 ± 4% -27.8% 376885 ± 2% numa-vmstat.node0.nr_dirtied
> 500007 ± 4% -27.9% 360465 ± 2% numa-vmstat.node0.nr_written
> 2131472 ± 3% -21.5% 1673240 ± 3% numa-vmstat.node0.numa_hit
> 2110065 ± 3% -22.4% 1638077 ± 3% numa-vmstat.node0.numa_local
> 423228 ± 4% -21.9% 330636 ± 4% numa-vmstat.node1.nr_dirtied
> 405391 ± 4% -21.9% 316667 ± 4% numa-vmstat.node1.nr_written
> 2099495 ± 4% -17.1% 1739892 ± 2% numa-vmstat.node1.numa_hit
> 2054537 ± 4% -16.8% 1708728 ± 2% numa-vmstat.node1.numa_local
> 4262 -16.1% 3575 reaim.jobs_per_min
> 66.61 -16.1% 55.87 reaim.jobs_per_min_child
> 4305 -15.6% 3635 reaim.max_jobs_per_min
> 90.09 +19.2% 107.42 reaim.parent_time
> 1.07 ± 4% -15.8% 0.90 ± 3% reaim.std_dev_percent
> 369.06 -10.9% 328.84 reaim.time.elapsed_time
> 369.06 -10.9% 328.84 reaim.time.elapsed_time.max
> 45840 +155.6% 117178 reaim.time.file_system_inputs
> 5943434 -25.9% 4401353 reaim.time.file_system_outputs
> 18556 -29.3% 13119 reaim.time.involuntary_context_switches
> 3738859 -25.0% 2804459 reaim.time.minor_page_faults
> 28.83 -16.8% 24.00 reaim.time.percent_of_cpu_this_job_got
> 93.94 -24.8% 70.62 reaim.time.system_time
> 1061461 -24.9% 797506 reaim.time.voluntary_context_switches
> 25600 -25.0% 19200 reaim.workload
> 187046 -1.3% 184574 proc-vmstat.nr_active_anon
> 945195 -25.1% 707517 proc-vmstat.nr_dirtied
> 1133127 -1.6% 1115274 proc-vmstat.nr_file_pages
> 214899 -7.1% 199637 proc-vmstat.nr_inactive_file
> 17745 -14.7% 15145 proc-vmstat.nr_shmem
> 905398 -25.2% 677133 proc-vmstat.nr_written
> 187046 -1.3% 184574 proc-vmstat.nr_zone_active_anon
> 214899 -7.1% 199637 proc-vmstat.nr_zone_inactive_file
> 4233781 -19.3% 3415500 proc-vmstat.numa_hit
> 4167415 -19.6% 3349180 proc-vmstat.numa_local
> 4401990 -19.1% 3559895 proc-vmstat.pgalloc_normal
> 4837041 -21.2% 3813117 proc-vmstat.pgfault
> 4058646 -20.7% 3220351 proc-vmstat.pgfree
> 22920 +155.6% 58589 proc-vmstat.pgpgin
> 6526981 -25.2% 4881946 proc-vmstat.pgpgout
> 48196 -8.6% 44072 ± 3% proc-vmstat.pgreuse
> 2.95 +0.1 3.06 perf-stat.i.branch-miss-rate%
> 9010149 +3.5% 9323057 perf-stat.i.branch-misses
> 15272312 -6.6% 14260618 perf-stat.i.cache-references
> 9701 -12.3% 8509 perf-stat.i.context-switches
> 1.97 -1.7% 1.94 perf-stat.i.cpi
> 2.028e+09 -3.4% 1.958e+09 perf-stat.i.cpu-cycles
> 345.48 -12.7% 301.68 perf-stat.i.cpu-migrations
> 1061 +2.2% 1083 perf-stat.i.cycles-between-cache-misses
> 0.53 +2.0% 0.54 perf-stat.i.ipc
> 3.86 ± 2% -20.1% 3.09 ± 3% perf-stat.i.major-faults
> 12520 -11.9% 11027 perf-stat.i.minor-faults
> 12524 -11.9% 11030 perf-stat.i.page-faults
> 3.66 +0.1 3.81 perf-stat.overall.branch-miss-rate%
> 16.24 ± 5% +1.2 17.46 ± 6% perf-stat.overall.cache-miss-rate%
> 1.64 -3.2% 1.59 perf-stat.overall.cpi
> 0.61 +3.3% 0.63 perf-stat.overall.ipc
> 17792434 +18.5% 21081619 perf-stat.overall.path-length
> 8979944 +3.5% 9292382 perf-stat.ps.branch-misses
> 15229067 -6.6% 14216663 perf-stat.ps.cache-references
> 9674 -12.3% 8483 perf-stat.ps.context-switches
> 2.022e+09 -3.5% 1.953e+09 perf-stat.ps.cpu-cycles
> 344.62 -12.7% 300.82 perf-stat.ps.cpu-migrations
> 3.85 ± 2% -20.2% 3.08 ± 4% perf-stat.ps.major-faults
> 12486 -12.0% 10993 perf-stat.ps.minor-faults
> 12490 -12.0% 10996 perf-stat.ps.page-faults
> 4.555e+11 -11.1% 4.048e+11 perf-stat.total.instructions
> 24864 ± 2% -18.6% 20241 ± 6% sched_debug.cfs_rq:/.avg_vruntime.avg
> 9646 ± 3% -32.1% 6545 ± 11% sched_debug.cfs_rq:/.avg_vruntime.min
> 88.62 ± 7% +17.8% 104.42 ± 6% sched_debug.cfs_rq:/.load_avg.avg
> 165.70 ± 8% +21.4% 201.23 ± 6% sched_debug.cfs_rq:/.load_avg.stddev
> 24864 ± 2% -18.6% 20241 ± 6% sched_debug.cfs_rq:/.min_vruntime.avg
> 9646 ± 3% -32.1% 6545 ± 11% sched_debug.cfs_rq:/.min_vruntime.min
> 7.59 ± 18% +40.4% 10.66 ± 7% sched_debug.cfs_rq:/.util_est.avg
> 197.81 ± 13% +32.3% 261.71 ± 17% sched_debug.cfs_rq:/.util_est.max
> 31.28 ± 12% +30.3% 40.77 ± 10% sched_debug.cfs_rq:/.util_est.stddev
> 229863 -17.2% 190282 ± 7% sched_debug.cpu.clock.avg
> 229868 -17.2% 190287 ± 7% sched_debug.cpu.clock.max
> 229857 -17.2% 190276 ± 7% sched_debug.cpu.clock.min
> 229303 -17.2% 189808 ± 7% sched_debug.cpu.clock_task.avg
> 229667 -17.2% 190143 ± 7% sched_debug.cpu.clock_task.max
> 219476 -17.8% 180349 ± 7% sched_debug.cpu.clock_task.min
> 749.88 ± 29% -28.8% 533.95 ± 14% sched_debug.cpu.curr->pid.avg
> 32645 -29.7% 22956 ± 8% sched_debug.cpu.curr->pid.max
> 4481 ± 10% -29.9% 3143 ± 10% sched_debug.cpu.curr->pid.stddev
> 29063 -30.0% 20341 ± 9% sched_debug.cpu.nr_switches.avg
> 83586 ± 2% -29.8% 58716 ± 7% sched_debug.cpu.nr_switches.max
> 22149 ± 2% -33.0% 14831 ± 11% sched_debug.cpu.nr_switches.min
> 7977 ± 2% -25.6% 5937 ± 5% sched_debug.cpu.nr_switches.stddev
> 92.43 ± 28% -47.8% 48.21 ± 10% sched_debug.cpu.nr_uninterruptible.max
> 25.24 ± 8% -26.2% 18.62 ± 9% sched_debug.cpu.nr_uninterruptible.stddev
> 229859 -17.2% 190278 ± 7% sched_debug.cpu_clk
> 229148 -17.3% 189567 ± 7% sched_debug.ktime
> 230484 -17.2% 190908 ± 7% sched_debug.sched_clk
> 0.00 ±136% +392.6% 0.02 ± 80% perf-sched.sch_delay.avg.ms.__cond_resched.dput.terminate_walk.path_openat.do_filp_open
> 0.03 ±114% -56.9% 0.01 ± 5% perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.btrfs_tree_read_lock_nested
> 0.25 ±209% -95.1% 0.01 ± 13% perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.open_last_lookups
> 0.21 ± 40% -38.8% 0.13 ± 11% perf-sched.sch_delay.max.ms.wait_current_trans.start_transaction.btrfs_attach_transaction_barrier.btrfs_sync_fs
> 0.07 ± 42% +126.0% 0.16 ± 72% perf-sched.sch_delay.max.ms.wait_extent_bit.__lock_extent.lock_extents_for_read.constprop.0
> 26708 ± 2% -9.5% 24158 ± 3% perf-sched.total_wait_and_delay.count.ms
> 3.20 ± 9% +25.5% 4.02 ± 11% perf-sched.wait_and_delay.avg.ms.btrfs_commit_transaction.iterate_supers.ksys_sync.__x64_sys_sync
> 7.26 ± 15% +50.7% 10.94 ± 19% perf-sched.wait_and_delay.avg.ms.btrfs_start_ordered_extent_nowriteback.btrfs_run_ordered_extent_work.btrfs_work_helper.process_one_work
> 19.16 ± 29% -65.5% 6.61 ±142% perf-sched.wait_and_delay.avg.ms.schedule_preempt_disabled.__mutex_lock.constprop.0.btrfs_wait_ordered_roots
> 0.02 ± 59% +188.1% 0.05 ± 63% perf-sched.wait_and_delay.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.sync_inodes_sb
> 15.73 ± 19% +50.8% 23.72 ± 10% perf-sched.wait_and_delay.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
> 256.17 ± 15% +34.5% 344.47 ± 13% perf-sched.wait_and_delay.avg.ms.wait_current_trans.start_transaction.btrfs_create_common.lookup_open.isra
> 78.54 ± 8% +18.1% 92.78 ± 11% perf-sched.wait_and_delay.avg.ms.wait_for_commit.btrfs_commit_transaction.iterate_supers.ksys_sync
> 67.31 ± 3% +12.5% 75.72 ± 5% perf-sched.wait_and_delay.avg.ms.wait_for_commit.btrfs_wait_for_commit.btrfs_attach_transaction_barrier.btrfs_sync_fs
> 805.00 ± 11% -21.9% 629.00 ± 11% perf-sched.wait_and_delay.count.btrfs_commit_transaction.iterate_supers.ksys_sync.__x64_sys_sync
> 833.33 ± 5% -29.6% 586.50 ± 13% perf-sched.wait_and_delay.count.btrfs_start_ordered_extent_nowriteback.btrfs_run_ordered_extent_work.btrfs_work_helper.process_one_work
> 2990 ± 5% -16.5% 2498 ± 3% perf-sched.wait_and_delay.count.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
> 151.17 ± 6% -70.6% 44.50 ±141% perf-sched.wait_and_delay.count.schedule_preempt_disabled.__mutex_lock.constprop.0.btrfs_wait_ordered_roots
> 553.50 ± 5% -26.3% 407.67 ± 9% perf-sched.wait_and_delay.count.schedule_timeout.__wait_for_common.btrfs_wait_ordered_extents.btrfs_wait_ordered_roots
> 618.50 ± 9% -25.4% 461.17 ± 12% perf-sched.wait_and_delay.count.wait_current_trans.start_transaction.btrfs_sync_file.btrfs_do_write_iter
> 955.00 ± 8% -27.5% 692.17 ± 8% perf-sched.wait_and_delay.count.wait_log_commit.btrfs_sync_log.btrfs_sync_file.btrfs_do_write_iter
> 551.40 ± 10% +33.8% 737.66 ± 8% perf-sched.wait_and_delay.max.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
> 251.59 ± 18% +74.5% 438.99 ± 14% perf-sched.wait_and_delay.max.ms.wait_current_trans.start_transaction.btrfs_attach_transaction_barrier.btrfs_sync_fs
> 11.70 ± 99% -100.0% 0.00 ±223% perf-sched.wait_time.avg.ms.__cond_resched.btree_write_cache_pages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range
> 44.61 ±189% +762.7% 384.84 ± 81% perf-sched.wait_time.avg.ms.__cond_resched.down_write.unlink_anon_vmas.free_pgtables.exit_mmap
> 3.19 ± 9% +25.6% 4.00 ± 11% perf-sched.wait_time.avg.ms.btrfs_commit_transaction.iterate_supers.ksys_sync.__x64_sys_sync
> 7.24 ± 15% +50.9% 10.93 ± 19% perf-sched.wait_time.avg.ms.btrfs_start_ordered_extent_nowriteback.btrfs_run_ordered_extent_work.btrfs_work_helper.process_one_work
> 1.69 ± 79% +178.9% 4.71 ± 62% perf-sched.wait_time.avg.ms.io_schedule.folio_wait_bit_common.extent_write_cache_pages.btrfs_writepages
> 15.72 ± 19% +50.9% 23.71 ± 10% perf-sched.wait_time.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
> 20.99 ± 2% +38.2% 29.02 ± 15% perf-sched.wait_time.avg.ms.schedule_timeout.btrfs_sync_log.btrfs_sync_file.btrfs_do_write_iter
> 12.97 ± 13% +49.0% 19.32 ± 16% perf-sched.wait_time.avg.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.barrier_all_devices
> 256.15 ± 15% +34.5% 344.45 ± 13% perf-sched.wait_time.avg.ms.wait_current_trans.start_transaction.btrfs_create_common.lookup_open.isra
> 10.94 ± 25% +84.6% 20.19 ± 24% perf-sched.wait_time.avg.ms.wait_extent_bit.__lock_extent.lock_and_cleanup_extent_if_need.copy_one_range
> 78.51 ± 8% +18.1% 92.75 ± 11% perf-sched.wait_time.avg.ms.wait_for_commit.btrfs_commit_transaction.iterate_supers.ksys_sync
> 67.26 ± 3% +12.5% 75.66 ± 5% perf-sched.wait_time.avg.ms.wait_for_commit.btrfs_wait_for_commit.btrfs_attach_transaction_barrier.btrfs_sync_fs
> 19.50 ±128% -100.0% 0.00 ±223% perf-sched.wait_time.max.ms.__cond_resched.btree_write_cache_pages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range
> 83.75 ±187% +516.6% 516.38 ± 68% perf-sched.wait_time.max.ms.__cond_resched.down_write.unlink_anon_vmas.free_pgtables.exit_mmap
> 551.39 ± 10% +33.8% 737.64 ± 8% perf-sched.wait_time.max.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
> 251.57 ± 18% +74.5% 438.95 ± 14% perf-sched.wait_time.max.ms.wait_current_trans.start_transaction.btrfs_attach_transaction_barrier.btrfs_sync_fs
> 86.74 ± 14% -27.4% 62.97 ± 11% perf-sched.wait_time.max.ms.wait_extent_bit.__lock_extent.lock_and_cleanup_extent_if_need.copy_one_range
>
>
> ***************************************************************************************************
> lkp-ivb-d01: 8 threads 1 sockets Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz (Ivy Bridge) with 16G memory
> =========================================================================================
> compiler/cpufreq_governor/disk/fs/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase:
> gcc-12/performance/1HDD/btrfs/x86_64-rhel-9.4/100%/debian-12-x86_64-20240206.cgz/300s/lkp-ivb-d01/disk/reaim
>
> commit:
> cbfb4cbf45 ("btrfs: update comment for try_release_extent_state()")
> 32c523c578 ("btrfs: allow folios to be released while ordered extent is finishing")
>
> cbfb4cbf459d9be4 32c523c578e8489f55663ce8a88
> ---------------- ---------------------------
> %stddev %change %stddev
> \ | \
> 22777 +101.6% 45913 ± 13% proc-vmstat.pgpgin
> 12134 ± 4% +18.9% 14426 ± 7% sched_debug.cpu.nr_switches.stddev
> 80.30 -1.8% 78.89 iostat.cpu.idle
> 17.77 +8.1% 19.21 iostat.cpu.iowait
> 54.02 +101.9% 109.06 vmstat.io.bi
> 4061 -5.0% 3856 vmstat.io.bo
> 2.60 +2.2% 2.65 reaim.child_systime
> 348.91 -5.3% 330.27 reaim.jobs_per_min
> 43.61 -5.3% 41.28 reaim.jobs_per_min_child
> 356.27 -5.4% 336.88 reaim.max_jobs_per_min
> 137.64 +5.6% 145.41 reaim.parent_time
> 45554 +101.6% 91826 ± 13% reaim.time.file_system_inputs
> 0.17 ± 25% -0.1 0.09 ± 31% perf-profile.children.cycles-pp.select_task_rq_fair
> 0.16 ± 29% -0.1 0.10 ± 33% perf-profile.children.cycles-pp.vfs_fstatat
> 0.21 ± 16% -0.1 0.15 ± 12% perf-profile.children.cycles-pp.___perf_sw_event
> 0.06 ± 79% +0.1 0.16 ± 33% perf-profile.children.cycles-pp.vms_complete_munmap_vmas
> 0.07 ± 72% +0.1 0.18 ± 16% perf-profile.children.cycles-pp.vms_clear_ptes
> 0.19 ± 18% -0.1 0.12 ± 19% perf-profile.self.cycles-pp.___perf_sw_event
> 0.15 ± 30% +0.1 0.23 ± 33% perf-profile.self.cycles-pp.x86_pmu_disable
> 0.04 ± 9% +21.8% 0.05 ± 11% perf-sched.sch_delay.avg.ms.__cond_resched.__wait_for_common.barrier_all_devices.write_all_supers.btrfs_sync_log
> 0.05 ± 4% +18.7% 0.06 ± 4% perf-sched.sch_delay.avg.ms.io_schedule.folio_wait_bit_common.write_all_supers.btrfs_sync_log
> 0.04 ± 69% -100.0% 0.00 perf-sched.sch_delay.avg.ms.wait_extent_bit.__lock_extent.lock_and_cleanup_extent_if_need.copy_one_range
> 0.05 ± 69% -100.0% 0.00 perf-sched.sch_delay.max.ms.wait_extent_bit.__lock_extent.lock_and_cleanup_extent_if_need.copy_one_range
> 19.30 ± 9% +17.0% 22.59 ± 6% perf-sched.wait_and_delay.avg.ms.io_schedule.folio_wait_bit_common.write_all_supers.btrfs_sync_log
> 19.25 ± 9% +17.0% 22.53 ± 6% perf-sched.wait_time.avg.ms.io_schedule.folio_wait_bit_common.write_all_supers.btrfs_sync_log
> 15.74 ±107% -100.0% 0.00 perf-sched.wait_time.avg.ms.wait_extent_bit.__lock_extent.lock_and_cleanup_extent_if_need.copy_one_range
> 22.16 ± 79% -100.0% 0.00 perf-sched.wait_time.max.ms.wait_extent_bit.__lock_extent.lock_and_cleanup_extent_if_need.copy_one_range
> 4.44 +1.9% 4.53 perf-stat.i.MPKI
> 8.16 +0.2 8.41 perf-stat.i.branch-miss-rate%
> 2313 +1.6% 2350 perf-stat.i.context-switches
> 2.25 +1.9% 2.29 perf-stat.i.cpi
> 73.48 ± 3% -6.6% 68.61 perf-stat.i.cpu-migrations
> 1972 -2.1% 1931 perf-stat.i.minor-faults
> 1972 -2.1% 1931 perf-stat.i.page-faults
> 2308 +1.6% 2344 perf-stat.ps.context-switches
> 73.31 ± 3% -6.6% 68.45 perf-stat.ps.cpu-migrations
> 1967 -2.1% 1926 perf-stat.ps.minor-faults
> 1968 -2.1% 1926 perf-stat.ps.page-faults
>
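As a sanity check (my own arithmetic, not part of the robot's report), the
two headline regression figures are consistent with the raw
reaim.jobs_per_min values quoted in the tables above:

```python
# Recompute the reported regression percentages from the raw
# reaim.jobs_per_min numbers in the two result tables.

def pct_change(base: float, patched: float) -> float:
    """Percentage change from the base commit to the patched commit,
    rounded to one decimal place as in the lkp report."""
    return round((patched - base) / base * 100, 1)

# lkp-icl-2sp7 (64-thread Ice Lake): 4262 -> 3575
print(pct_change(4262, 3575))       # -16.1, the headline regression

# lkp-ivb-d01 (8-thread Ivy Bridge): 348.91 -> 330.27
print(pct_change(348.91, 330.27))   # -5.3, the figure in the summary table
```

So the 5.3% number in the summary table really is the same metric from the
same test, just measured on the smaller Ivy Bridge machine.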
>
>
>
>
> Disclaimer:
> Results have been estimated based on internal Intel analysis and are provided
> for informational purposes only. Any difference in system hardware or software
> design or configuration may affect actual performance.
>
>
> --
> 0-DAY CI Kernel Test Service
> https://github.com/intel/lkp-tests/wiki
>
>