Date: Sat, 20 Jun 2020 22:38:45 +0800
From: kernel test robot <rong.a.chen@...el.com>
To: Dan Williams <dan.j.williams@...el.com>
Cc: Jeff Smits <jeff.smits@...el.com>, Doug Nelson <doug.nelson@...el.com>,
	Jan Kara <jack@...e.cz>, Jeff Moyer <jmoyer@...hat.com>,
	Matthew Wilcox <willy@...radead.org>, Johannes Thumshirn <jthumshirn@...e.de>,
	LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org
Subject: [fs/dax] 6370740e5f: fio.write_iops 395.6% improvement

Greeting,

FYI, we noticed a 395.6% improvement of fio.write_iops due to commit:

commit: 6370740e5f8ef12de7f9a9bf48a0393d202cd827 ("fs/dax: Fix pmd vs pte conflict detection")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master

in testcase: fio-basic
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory
with following parameters:

	disk: 2pmem
	fs: ext4
	mount_option: dax
	runtime: 200s
	nr_task: 50%
	time_based: tb
	rw: write
	bs: 4k
	ioengine: mmap
	test_size: 200G
	cpufreq_governor: performance
	ucode: 0x500002c

test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
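For reference, the parameters above correspond roughly to a fio job file along the following lines. This is a sketch, not the job.yaml attached to the report: the directory path and numjobs value are illustrative assumptions (the report only states nr_task=50% on a 96-thread machine, with ext4 mounted with -o dax on pmem).

```ini
; Approximate fio job reconstructed from the reported parameters.
; NOT the attached job.yaml -- directory and numjobs are assumptions.
[global]
bs=4k
rw=write
ioengine=mmap
time_based
runtime=200
size=200G
directory=/mnt/pmem0   ; illustrative mount point for the dax-mounted ext4 fs

[dax-write]
numjobs=48             ; 50% of 96 hardware threads
```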
test-url: https://github.com/axboe/fio

Details are as below:
-------------------------------------------------------------------------------------------------->

To reproduce:

	git clone https://github.com/intel/lkp-tests.git
	cd lkp-tests
	bin/lkp install job.yaml  # job file is attached in this email
	bin/lkp run     job.yaml

=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/mount_option/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based/ucode:
  4k/gcc-9/performance/2pmem/ext4/mmap/x86_64-rhel-7.6/dax/50%/debian-x86_64-20191114.cgz/200s/write/lkp-csl-2sp6/200G/fio-basic/tb/0x500002c

commit:
  v5.4-rc3
  6370740e5f ("fs/dax: Fix pmd vs pte conflict detection")

       v5.4-rc3 6370740e5f8ef12de7f9a9bf48a
---------------- ---------------------------
       fail:runs  %reproduction  fail:runs
           |            |            |
          :4          25%          1:4   dmesg.WARNING:at#for_ip_interrupt_entry/0x
         1:4         -16%          0:4   perf-profile.children.cycles-pp.error_entry
        %stddev      %change      %stddev
            \           |             \
      0.40 ± 10%       +9.1       9.47 ± 70%  fio.latency_10us%
     54.58 ±  4%      -54.1       0.47 ± 39%  fio.latency_20us%
      0.01 ±100%      +31.6      31.56 ± 46%  fio.latency_2us%
      0.01            +56.2      56.24 ± 11%  fio.latency_4us%
      0.01             -0.0       0.00        fio.latency_50ms%
     44.90 ±  5%      -44.6       0.27 ±  5%  fio.latency_50us%
     23120            -4.6%      22046        fio.time.involuntary_context_switches
 4.687e+08           -89.2%   50425197 ± 12%  fio.time.minor_page_faults
      9082           -95.0%     452.32 ±  2%  fio.time.system_time
    532.66 ± 16%   +1620.9%       9166        fio.time.user_time
     23699           -10.9%      21120        fio.time.voluntary_context_switches
 4.686e+08          +395.6%  2.322e+09 ± 12%  fio.workload
      9152          +395.6%      45355 ± 12%  fio.write_bw_MBps
     24512 ±  3%     -84.3%       3852 ± 12%  fio.write_clat_90%_us
     26688 ±  3%     -82.9%       4564 ± 12%  fio.write_clat_95%_us
     33952 ±  2%     -74.0%       8816 ± 18%  fio.write_clat_99%_us
     20156           -85.9%       2841 ± 15%  fio.write_clat_mean_us
      8994 ± 20%     -58.1%       3765 ± 12%  fio.write_clat_stddev
   2342964          +395.6%   11610937 ± 12%  fio.write_iops
     15439 ±  6%     -66.9%       5104 ± 67%  cpuidle.POLL.usage
     50.63            -1.6%      49.80        iostat.cpu.idle
     46.60           -94.5%       2.54 ±  2%  iostat.cpu.system
      2.77 ± 16%   +1622.0%      47.66        iostat.cpu.user
      0.00 ± 21%       +0.0       0.00 ± 42%  mpstat.cpu.all.soft%
     47.07            -44.5       2.56 ±  2%  mpstat.cpu.all.sys%
      2.80 ± 16%      +45.4      48.16        mpstat.cpu.all.usr%
     50.00            -2.0%      49.00        vmstat.cpu.id
      2.25 ± 19%   +1988.9%      47.00        vmstat.cpu.us
   2300185           -21.4%    1809018        vmstat.memory.cache
      1893           -11.2%       1680        vmstat.system.cs
   1375681 ±  2%     -75.5%     337097 ± 28%  numa-numastat.node0.local_node
   1375852 ±  2%     -74.0%     357229 ± 25%  numa-numastat.node0.numa_hit
    186.50 ± 30%  +10695.7%      20134 ± 33%  numa-numastat.node0.other_node
   1117552 ±  4%     -59.3%     454446 ± 19%  numa-numastat.node1.local_node
   1148592 ±  4%     -59.5%     465482 ± 17%  numa-numastat.node1.numa_hit
     31041           -64.4%      11044 ± 60%  numa-numastat.node1.other_node
    218858 ±  2%     -88.6%      24957 ± 22%  numa-vmstat.node0.nr_page_table_pages
     50798 ±  5%     -72.3%      14047 ± 15%  numa-vmstat.node0.nr_slab_reclaimable
      1161 ± 57%   +1680.9%      20676 ± 30%  numa-vmstat.node0.numa_other
    271534 ± 19%     -25.5%     202374 ± 23%  numa-vmstat.node1.nr_file_pages
    235732 ±  3%     -90.6%      22063 ± 23%  numa-vmstat.node1.nr_page_table_pages
    120164 ± 40%     -56.1%      52795 ± 85%  numa-vmstat.node1.nr_shmem
     79733 ±  3%     -86.7%      10622 ± 20%  numa-vmstat.node1.nr_slab_reclaimable
   1470265 ± 19%     -42.1%     851431 ± 25%  numa-vmstat.node1.numa_hit
   1270970 ± 22%     -47.1%     671876 ± 31%  numa-vmstat.node1.numa_local
    464060 ±  3%     -15.3%     393219        meminfo.Active
    441369 ±  3%     -16.0%     370551        meminfo.Active(anon)
   6547456 ±  6%     +12.9%    7393280 ±  9%  meminfo.DirectMap2M
    522200           -81.1%      98704        meminfo.KReclaimable
   8474727           -24.9%    6363349        meminfo.Memused
   1826104           -89.7%     188450 ± 12%  meminfo.PageTables
    522200           -81.1%      98704        meminfo.SReclaimable
    597205 ±  2%     -11.5%     528428        meminfo.Shmem
    702271           -60.7%     276327        meminfo.Slab
     51746           -35.9%      33169        meminfo.max_used_kB
      1634           -27.4%       1186 ±  4%  slabinfo.dquot.active_objs
      1634           -27.4%       1186 ±  4%  slabinfo.dquot.num_objs
    819.50 ± 10%     +15.5%     946.25 ±  6%  slabinfo.pool_workqueue.active_objs
    830.50 ± 10%     +13.9%     946.25 ±  6%  slabinfo.pool_workqueue.num_objs
     12999            -8.6%      11876 ±  6%  slabinfo.proc_inode_cache.active_objs
     12999            -8.6%      11877 ±  6%  slabinfo.proc_inode_cache.num_objs
    800204           -92.7%      58259        slabinfo.radix_tree_node.active_objs
     14289           -92.7%       1040        slabinfo.radix_tree_node.active_slabs
    800208           -92.7%      58264        slabinfo.radix_tree_node.num_objs
     14289           -92.7%       1040        slabinfo.radix_tree_node.num_slabs
    203228 ±  5%     -72.3%      56197 ± 15%  numa-meminfo.node0.KReclaimable
   4091038 ±  4%     -19.0%    3314297 ±  4%  numa-meminfo.node0.MemUsed
    879674 ±  2%     -88.6%      99945 ± 22%  numa-meminfo.node0.PageTables
    203228 ±  5%     -72.3%      56197 ± 15%  numa-meminfo.node0.SReclaimable
    299378 ±  5%     -48.8%     153428 ±  9%  numa-meminfo.node0.Slab
   1087077 ± 19%     -25.5%     809584 ± 23%  numa-meminfo.node1.FilePages
    318974 ±  3%     -86.7%      42503 ± 20%  numa-meminfo.node1.KReclaimable
   4385419 ±  4%     -30.5%    3049003 ±  4%  numa-meminfo.node1.MemUsed
    948009 ±  3%     -90.7%      88429 ± 24%  numa-meminfo.node1.PageTables
    318974 ±  3%     -86.7%      42503 ± 20%  numa-meminfo.node1.SReclaimable
    481598 ± 40%     -56.1%     211268 ± 85%  numa-meminfo.node1.Shmem
    402895 ±  4%     -69.5%     122893 ± 13%  numa-meminfo.node1.Slab
    110354 ±  3%     -16.0%      92652        proc-vmstat.nr_active_anon
   1007896            +5.2%    1060396        proc-vmstat.nr_dirty_background_threshold
   2018257            +5.2%    2123385        proc-vmstat.nr_dirty_threshold
    446916            -3.9%     429575        proc-vmstat.nr_file_pages
  10160963            +5.2%   10686740        proc-vmstat.nr_free_pages
    109917            +1.6%     111677        proc-vmstat.nr_mapped
    453485           -89.7%      46880 ± 12%  proc-vmstat.nr_page_table_pages
    149326 ±  2%     -11.6%     131976        proc-vmstat.nr_shmem
    130550           -81.1%      24674        proc-vmstat.nr_slab_reclaimable
    110354 ±  3%     -16.0%      92652        proc-vmstat.nr_zone_active_anon
   2547338           -66.7%     847861 ±  3%  proc-vmstat.numa_hit
   2516107           -67.5%     816682 ±  3%  proc-vmstat.numa_local
     18727 ±  5%      +9.5%      20501 ±  4%  proc-vmstat.numa_pages_migrated
     41638            +4.4%      43462 ±  2%  proc-vmstat.numa_pte_updates
     71982 ± 10%     -45.3%      39355 ±  2%  proc-vmstat.pgactivate
   2681188           -67.2%     878426 ±  2%  proc-vmstat.pgalloc_normal
 4.693e+08           -89.1%   50966143 ± 12%  proc-vmstat.pgfault
   2110747 ±  9%     -65.1%     737475 ± 13%  proc-vmstat.pgfree
     18727 ±  5%      +9.5%      20501 ±  4%  proc-vmstat.pgmigrate_success
    915526           -90.1%      90989 ± 13%  proc-vmstat.thp_fault_fallback
     32.91 ±110%    +426.5%     173.30 ± 19%  sched_debug.cfs_rq:/.MIN_vruntime.avg
      3159 ±110%    +316.0%      13144 ± 12%  sched_debug.cfs_rq:/.MIN_vruntime.max
    320.81 ±110%    +362.3%       1483 ± 14%  sched_debug.cfs_rq:/.MIN_vruntime.stddev
    132.59 ±  9%    +251.5%     466.09 ± 16%  sched_debug.cfs_rq:/.exec_clock.min
     32.91 ±110%    +426.7%     173.36 ± 19%  sched_debug.cfs_rq:/.max_vruntime.avg
      3159 ±110%    +316.1%      13149 ± 12%  sched_debug.cfs_rq:/.max_vruntime.max
    320.81 ±110%    +362.5%       1483 ± 14%  sched_debug.cfs_rq:/.max_vruntime.stddev
      1.12 ± 11%     +38.9%       1.56 ±  6%  sched_debug.cfs_rq:/.nr_running.max
      0.26 ±  8%     +13.0%       0.30 ±  7%  sched_debug.cfs_rq:/.nr_spread_over.avg
      0.38 ±  3%     +22.3%       0.46 ±  7%  sched_debug.cfs_rq:/.nr_spread_over.stddev
    637.19 ± 19%     -69.9%     191.94 ± 57%  sched_debug.cfs_rq:/.removed.load_avg.max
     87.19 ± 21%     -60.9%      34.08 ± 59%  sched_debug.cfs_rq:/.removed.load_avg.stddev
     29354 ± 19%     -69.8%       8870 ± 57%  sched_debug.cfs_rq:/.removed.runnable_sum.max
      4014 ± 21%     -61.0%       1566 ± 59%  sched_debug.cfs_rq:/.removed.runnable_sum.stddev
      5.82 ± 24%     -52.8%       2.75 ± 62%  sched_debug.cfs_rq:/.removed.util_avg.avg
    309.50 ± 17%     -69.3%      94.94 ± 57%  sched_debug.cfs_rq:/.removed.util_avg.max
     41.11 ± 22%     -63.7%      14.93 ± 58%  sched_debug.cfs_rq:/.removed.util_avg.stddev
    480.51 ±  3%      -8.9%     437.80        sched_debug.cfs_rq:/.util_est_enqueued.avg
    600348           +35.5%     813756        sched_debug.cpu.avg_idle.avg
     53051 ± 12%    +225.5%     172705 ± 18%  sched_debug.cpu.avg_idle.min
    373602           -42.3%     215470 ±  2%  sched_debug.cpu.avg_idle.stddev
      2.76 ±  3%    +787.4%      24.46 ± 13%  sched_debug.cpu.clock.stddev
      2.76 ±  3%    +787.2%      24.46 ± 13%  sched_debug.cpu.clock_task.stddev
      0.00 ±  5%    +577.5%       0.00 ±  4%  sched_debug.cpu.next_balance.stddev
      1.12 ± 11%     +61.1%       1.81 ±  5%  sched_debug.cpu.nr_running.max
      3939 ±  2%      -8.7%       3595 ±  3%  sched_debug.cpu.nr_switches.avg
     24621 ± 13%     -34.4%      16144 ± 25%  sched_debug.cpu.nr_switches.max
      3567 ±  8%     -28.2%       2560 ± 13%  sched_debug.cpu.nr_switches.stddev
      1806           -11.1%       1606        sched_debug.cpu.sched_count.avg
     16641 ± 29%     -46.0%       8987 ± 17%  sched_debug.cpu.sched_count.max
      2510 ± 16%     -31.2%       1726 ±  9%  sched_debug.cpu.sched_count.stddev
    731.65           -12.6%     639.39        sched_debug.cpu.sched_goidle.avg
      8255 ± 29%     -47.5%       4333 ± 20%  sched_debug.cpu.sched_goidle.max
     26.25 ±  3%     -60.2%      10.44 ± 10%  sched_debug.cpu.sched_goidle.min
      1282 ± 16%     -30.1%     895.96 ±  8%  sched_debug.cpu.sched_goidle.stddev
    840.16           -11.5%     743.14        sched_debug.cpu.ttwu_count.avg
      7916 ± 29%     -52.7%       3747 ±  9%  sched_debug.cpu.ttwu_count.max
    131.75 ±  2%     +20.4%     158.69 ± 16%  sched_debug.cpu.ttwu_count.min
      1210 ± 16%     -42.4%     697.51 ±  6%  sched_debug.cpu.ttwu_count.stddev
    529.76 ±  2%     -12.0%     465.99        sched_debug.cpu.ttwu_local.avg
      7771 ± 31%     -58.3%       3237 ± 12%  sched_debug.cpu.ttwu_local.max
      1054 ± 18%     -49.2%     535.84 ±  6%  sched_debug.cpu.ttwu_local.stddev
     19.23 ± 11%     +47.8%      28.41        perf-stat.i.MPKI
      0.41 ±  6%       -0.1       0.33        perf-stat.i.branch-miss-rate%
  22070311 ±  3%     -15.9%   18552514 ± 11%  perf-stat.i.branch-misses
     54.93 ±  2%      +37.1      92.06        perf-stat.i.cache-miss-rate%
 2.558e+08 ±  8%    +190.4%  7.429e+08 ± 12%  perf-stat.i.cache-misses
 4.646e+08 ±  6%     +73.0%  8.036e+08 ± 12%  perf-stat.i.cache-references
      1856           -11.2%       1649        perf-stat.i.context-switches
 1.343e+11            +1.3%   1.36e+11        perf-stat.i.cpu-cycles
    124.33            -7.0%     115.69        perf-stat.i.cpu-migrations
    535.71 ±  7%     -63.3%     196.43 ± 13%  perf-stat.i.cycles-between-cache-misses
      0.00 ± 41%       -0.0       0.00 ± 58%  perf-stat.i.dTLB-load-miss-rate%
    154814 ± 34%     -71.8%      43585 ±  9%  perf-stat.i.dTLB-load-misses
      0.44             -0.4       0.03 ±  2%  perf-stat.i.dTLB-store-miss-rate%
  12372203           -89.3%    1324695 ± 12%  perf-stat.i.dTLB-store-misses
 2.776e+09           +70.0%  4.721e+09 ± 11%  perf-stat.i.dTLB-stores
     72.99 ±  4%      -37.4      35.60 ±  6%  perf-stat.i.iTLB-load-miss-rate%
  15571584 ± 11%     -82.5%    2732580 ± 10%  perf-stat.i.iTLB-load-misses
   5727015 ±  5%     -13.0%    4982904        perf-stat.i.iTLB-loads
      1582 ± 13%    +556.0%      10380 ±  6%  perf-stat.i.instructions-per-iTLB-miss
      1.40            +3.8%       1.45        perf-stat.i.metric.GHz
      0.03 ±  7%    +109.2%       0.06 ± 33%  perf-stat.i.metric.K/sec
    159.46 ±  3%     +30.9%     208.69 ± 12%  perf-stat.i.metric.M/sec
   2328910           -89.2%     252478 ± 12%  perf-stat.i.minor-faults
     91.67            -23.6      68.09        perf-stat.i.node-load-miss-rate%
  15222901 ± 18%     -95.2%     735709 ±  9%  perf-stat.i.node-load-misses
   1375746 ± 28%     -72.2%     382331 ±  9%  perf-stat.i.node-loads
  44301910 ± 25%    +464.3%    2.5e+08 ± 11%  perf-stat.i.node-store-misses
   2328910           -89.2%     252478 ± 12%  perf-stat.i.page-faults
     19.27 ± 11%     +48.0%      28.52        perf-stat.overall.MPKI
      0.41 ±  6%       -0.1       0.32        perf-stat.overall.branch-miss-rate%
     55.00 ±  2%      +37.5      92.46        perf-stat.overall.cache-miss-rate%
    528.39 ±  8%     -64.7%     186.35 ± 14%  perf-stat.overall.cycles-between-cache-misses
      0.00 ± 39%       -0.0       0.00 ± 11%  perf-stat.overall.dTLB-load-miss-rate%
      0.44             -0.4       0.03        perf-stat.overall.dTLB-store-miss-rate%
     72.91 ±  4%      -37.6      35.31 ±  6%  perf-stat.overall.iTLB-load-miss-rate%
      1580 ± 14%    +552.5%      10311 ±  6%  perf-stat.overall.instructions-per-iTLB-miss
     91.84            -25.9      65.94 ±  2%  perf-stat.overall.node-load-miss-rate%
     10384 ±  4%     -76.6%       2433        perf-stat.overall.path-length
  21960725 ±  3%     -16.3%   18384384 ± 11%  perf-stat.ps.branch-misses
 2.545e+08 ±  8%    +190.3%  7.389e+08 ± 12%  perf-stat.ps.cache-misses
 4.623e+08 ±  6%     +72.9%  7.992e+08 ± 12%  perf-stat.ps.cache-references
      1847           -12.3%       1621        perf-stat.ps.context-switches
 1.336e+11            +1.3%  1.354e+11        perf-stat.ps.cpu-cycles
    123.73           -10.2%     111.07        perf-stat.ps.cpu-migrations
    154069 ± 34%     -72.3%      42673 ±  8%  perf-stat.ps.dTLB-load-misses
  12310345           -89.3%    1319449 ± 12%  perf-stat.ps.dTLB-store-misses
 2.762e+09           +70.0%  4.695e+09 ± 11%  perf-stat.ps.dTLB-stores
  15493689 ± 11%     -82.5%    2716147 ± 10%  perf-stat.ps.iTLB-load-misses
   5698308 ±  5%     -13.0%    4956823        perf-stat.ps.iTLB-loads
   2317263           -89.1%     251531 ± 12%  perf-stat.ps.minor-faults
  15146656 ± 18%     -95.2%     725824 ±  9%  perf-stat.ps.node-load-misses
   1368874 ± 28%     -72.6%     374588 ±  9%  perf-stat.ps.node-loads
  44080864 ± 25%    +465.0%  2.491e+08 ± 11%  perf-stat.ps.node-store-misses
   2317263           -89.1%     251531 ± 12%  perf-stat.ps.page-faults
      7888           -26.9%       5769 ± 23%  interrupts.CPU11.NMI:Non-maskable_interrupts
      7888           -26.9%       5769 ± 23%  interrupts.CPU11.PMI:Performance_monitoring_interrupts
    197.50           -30.8%     136.75 ± 29%  interrupts.CPU11.RES:Rescheduling_interrupts
     38.00 ± 75%    +113.2%      81.00 ± 20%  interrupts.CPU11.TLB:TLB_shootdowns
      7341 ± 12%     -34.8%       4789 ± 19%  interrupts.CPU12.NMI:Non-maskable_interrupts
      7341 ± 12%     -34.8%       4789 ± 19%  interrupts.CPU12.PMI:Performance_monitoring_interrupts
    179.00 ± 18%     -51.5%      86.75 ± 77%  interrupts.CPU12.RES:Rescheduling_interrupts
      7878           -27.2%       5732 ± 23%  interrupts.CPU14.NMI:Non-maskable_interrupts
      7878           -27.2%       5732 ± 23%  interrupts.CPU14.PMI:Performance_monitoring_interrupts
    197.00           -46.1%     106.25 ± 57%  interrupts.CPU14.RES:Rescheduling_interrupts
      7875           -39.0%       4800 ± 19%  interrupts.CPU15.NMI:Non-maskable_interrupts
      7875           -39.0%       4800 ± 19%  interrupts.CPU15.PMI:Performance_monitoring_interrupts
    202.50 ±  5%     -48.9%     103.50 ± 53%  interrupts.CPU15.RES:Rescheduling_interrupts
      7341 ± 12%     -21.6%       5757 ± 23%  interrupts.CPU17.NMI:Non-maskable_interrupts
      7341 ± 12%     -21.6%       5757 ± 23%  interrupts.CPU17.PMI:Performance_monitoring_interrupts
     38.50 ± 32%    +102.6%      78.00 ± 26%  interrupts.CPU17.TLB:TLB_shootdowns
      7395 ± 12%     -54.8%       3341 ± 12%  interrupts.CPU2.NMI:Non-maskable_interrupts
      7395 ± 12%     -54.8%       3341 ± 12%  interrupts.CPU2.PMI:Performance_monitoring_interrupts
     36.25 ± 22%    +756.6%     310.50 ± 56%  interrupts.CPU25.RES:Rescheduling_interrupts
     11.25 ± 27%   +1655.6%     197.50 ± 30%  interrupts.CPU26.RES:Rescheduling_interrupts
    125.25 ± 10%     -53.5%      58.25 ± 53%  interrupts.CPU26.TLB:TLB_shootdowns
     48.00 ±145%    +274.5%     179.75 ± 44%  interrupts.CPU27.RES:Rescheduling_interrupts
     31.50 ± 81%    +523.0%     196.25 ± 26%  interrupts.CPU28.RES:Rescheduling_interrupts
     25.25 ±125%    +494.1%     150.00 ± 58%  interrupts.CPU29.RES:Rescheduling_interrupts
     13.00 ± 93%   +1244.2%     174.75 ± 50%  interrupts.CPU30.RES:Rescheduling_interrupts
     10.50 ± 44%   +1726.2%     191.75 ± 54%  interrupts.CPU32.RES:Rescheduling_interrupts
     10.75 ± 32%   +1520.9%     174.25 ± 49%  interrupts.CPU33.RES:Rescheduling_interrupts
     19.00 ± 59%    +785.5%     168.25 ± 42%  interrupts.CPU34.RES:Rescheduling_interrupts
     10.25 ± 17%   +1526.8%     166.75 ± 53%  interrupts.CPU35.RES:Rescheduling_interrupts
      3372 ± 42%     +61.7%       5454 ± 39%  interrupts.CPU36.NMI:Non-maskable_interrupts
      3372 ± 42%     +61.7%       5454 ± 39%  interrupts.CPU36.PMI:Performance_monitoring_interrupts
     12.00 ± 29%   +1293.8%     167.25 ± 53%  interrupts.CPU36.RES:Rescheduling_interrupts
      3365 ± 42%     +79.1%       6028 ± 31%  interrupts.CPU37.NMI:Non-maskable_interrupts
      3365 ± 42%     +79.1%       6028 ± 31%  interrupts.CPU37.PMI:Performance_monitoring_interrupts
     18.75 ± 72%    +702.7%     150.50 ± 52%  interrupts.CPU37.RES:Rescheduling_interrupts
      3372 ± 42%     +90.2%       6413 ± 30%  interrupts.CPU38.NMI:Non-maskable_interrupts
      3372 ± 42%     +90.2%       6413 ± 30%  interrupts.CPU38.PMI:Performance_monitoring_interrupts
     14.00 ± 29%   +1151.8%     175.25 ± 53%  interrupts.CPU38.RES:Rescheduling_interrupts
      3368 ± 42%     +62.5%       5472 ± 38%  interrupts.CPU39.NMI:Non-maskable_interrupts
      3368 ± 42%     +62.5%       5472 ± 38%  interrupts.CPU39.PMI:Performance_monitoring_interrupts
      6893 ± 24%     -60.9%       2698 ± 13%  interrupts.CPU4.NMI:Non-maskable_interrupts
      6893 ± 24%     -60.9%       2698 ± 13%  interrupts.CPU4.PMI:Performance_monitoring_interrupts
    191.75 ± 20%     -49.0%      97.75 ± 41%  interrupts.CPU4.RES:Rescheduling_interrupts
     16.00 ± 28%    +789.1%     142.25 ± 61%  interrupts.CPU41.RES:Rescheduling_interrupts
     14.25 ± 25%   +1112.3%     172.75 ± 51%  interrupts.CPU42.RES:Rescheduling_interrupts
      3017 ± 54%    +119.7%       6628 ± 25%  interrupts.CPU43.NMI:Non-maskable_interrupts
      3017 ± 54%    +119.7%       6628 ± 25%  interrupts.CPU43.PMI:Performance_monitoring_interrupts
     27.50 ± 70%    +734.5%     229.50 ±  5%  interrupts.CPU43.RES:Rescheduling_interrupts
      3001 ± 54%     +83.0%       5491 ± 39%  interrupts.CPU44.NMI:Non-maskable_interrupts
      3001 ± 54%     +83.0%       5491 ± 39%  interrupts.CPU44.PMI:Performance_monitoring_interrupts
     28.00 ± 80%    +535.7%     178.00 ± 47%  interrupts.CPU44.RES:Rescheduling_interrupts
     50.75 ±109%    +269.0%     187.25 ± 25%  interrupts.CPU45.RES:Rescheduling_interrupts
     34.25 ± 38%    +440.1%     185.00 ± 37%  interrupts.CPU46.RES:Rescheduling_interrupts
    116.50 ±  2%    +111.2%     246.00 ± 18%  interrupts.CPU47.RES:Rescheduling_interrupts
     44.75 ± 82%    +340.8%     197.25 ± 35%  interrupts.CPU49.RES:Rescheduling_interrupts
      7860           -35.5%       5069 ± 35%  interrupts.CPU5.NMI:Non-maskable_interrupts
      7860           -35.5%       5069 ± 35%  interrupts.CPU5.PMI:Performance_monitoring_interrupts
    247.75 ± 32%     -55.7%     109.75 ± 52%  interrupts.CPU5.RES:Rescheduling_interrupts
     32.50 ± 21%    +135.4%      76.50 ± 22%  interrupts.CPU5.TLB:TLB_shootdowns
     47.25 ± 77%    +306.9%     192.25 ± 25%  interrupts.CPU50.RES:Rescheduling_interrupts
     51.00 ± 25%     +59.8%      81.50 ± 27%  interrupts.CPU50.TLB:TLB_shootdowns
     41.25 ±102%    +354.5%     187.50 ± 35%  interrupts.CPU51.RES:Rescheduling_interrupts
      3828 ± 29%     +98.2%       7587        interrupts.CPU52.NMI:Non-maskable_interrupts
      3828 ± 29%     +98.2%       7587        interrupts.CPU52.PMI:Performance_monitoring_interrupts
     41.00 ± 74%    +411.6%     209.75 ± 23%  interrupts.CPU52.RES:Rescheduling_interrupts
     15.00 ± 32%   +1260.0%     204.00 ± 18%  interrupts.CPU53.RES:Rescheduling_interrupts
     57.25 ±139%    +230.1%     189.00 ± 43%  interrupts.CPU54.RES:Rescheduling_interrupts
     10.75 ± 81%   +1744.2%     198.25 ± 35%  interrupts.CPU56.RES:Rescheduling_interrupts
     48.75 ± 83%    +445.6%     266.00 ± 16%  interrupts.CPU57.RES:Rescheduling_interrupts
     31.00 ± 72%    +304.0%     125.25 ± 41%  interrupts.CPU58.RES:Rescheduling_interrupts
     24.75 ± 95%   +1548.5%     408.00 ± 70%  interrupts.CPU59.RES:Rescheduling_interrupts
     57.00 ± 62%    +285.1%     219.50 ± 17%  interrupts.CPU60.RES:Rescheduling_interrupts
     42.00 ± 93%    +367.9%     196.50 ± 31%  interrupts.CPU61.RES:Rescheduling_interrupts
     23.50 ± 75%    +825.5%     217.50 ± 11%  interrupts.CPU62.RES:Rescheduling_interrupts
     28.00 ± 27%    +863.4%     269.75 ± 27%  interrupts.CPU63.RES:Rescheduling_interrupts
     62.00 ± 17%     +35.1%      83.75 ± 15%  interrupts.CPU63.TLB:TLB_shootdowns
     16.75 ± 11%   +1114.9%     203.50 ± 19%  interrupts.CPU64.RES:Rescheduling_interrupts
     50.00 ±110%    +279.5%     189.75 ± 32%  interrupts.CPU65.RES:Rescheduling_interrupts
     49.75 ± 78%    +404.5%     251.00 ± 12%  interrupts.CPU66.RES:Rescheduling_interrupts
     41.25 ±108%    +575.8%     278.75 ± 72%  interrupts.CPU67.RES:Rescheduling_interrupts
     28.00 ± 46%    +507.1%     170.00 ± 31%  interrupts.CPU68.RES:Rescheduling_interrupts
     26.00 ± 75%    +455.8%     144.50 ± 51%  interrupts.CPU69.RES:Rescheduling_interrupts
     40.50 ±104%    +345.7%     180.50 ± 17%  interrupts.CPU70.RES:Rescheduling_interrupts
     36.25 ± 92%    +778.6%     318.50 ± 32%  interrupts.CPU71.RES:Rescheduling_interrupts
      7879           -27.0%       5750 ± 23%  interrupts.CPU8.NMI:Non-maskable_interrupts
      7879           -27.0%       5750 ± 23%  interrupts.CPU8.PMI:Performance_monitoring_interrupts
    203.50           -50.2%     101.25 ± 68%  interrupts.CPU90.RES:Rescheduling_interrupts
    671.50 ± 10%     -43.4%     380.00 ±  6%  interrupts.IWI:IRQ_work_interrupts
     11696           +40.8%      16467 ±  5%  interrupts.RES:Rescheduling_interrupts
      6735 ± 10%    +479.4%      39025 ±  7%  softirqs.CPU0.RCU
      5876 ± 16%    +562.3%      38920 ±  9%  softirqs.CPU1.RCU
      7122 ± 71%    +196.8%      21139 ± 44%  softirqs.CPU1.SCHED
     61814 ±  5%     +11.6%      69011 ±  2%  softirqs.CPU1.TIMER
      5511 ± 16%    +618.9%      39620 ± 12%  softirqs.CPU10.RCU
     60479 ±  4%     +20.3%      72729 ±  4%  softirqs.CPU10.TIMER
      6474 ± 39%    +514.4%      39775 ± 10%  softirqs.CPU11.RCU
      4226 ± 35%    +393.3%      20846 ± 43%  softirqs.CPU11.SCHED
     61801 ±  8%     +16.9%      72255 ±  2%  softirqs.CPU11.TIMER
      6867 ± 25%    +467.3%      38955 ±  4%  softirqs.CPU12.RCU
     62165 ±  5%     +13.4%      70495 ±  3%  softirqs.CPU12.TIMER
     13622 ± 66%    +186.2%      38983 ±  9%  softirqs.CPU13.RCU
      5685 ± 12%    +598.1%      39689 ±  4%  softirqs.CPU14.RCU
      3463 ±  4%    +441.2%      18741 ± 37%  softirqs.CPU14.SCHED
     60327 ±  4%     +16.0%      69957 ±  2%  softirqs.CPU14.TIMER
      7291 ± 27%    +421.6%      38033 ±  6%  softirqs.CPU15.RCU
      4843 ± 35%    +357.7%      22168 ± 27%  softirqs.CPU15.SCHED
      6821 ± 26%    +468.0%      38740 ±  9%  softirqs.CPU16.RCU
      5854 ± 14%    +569.4%      39187 ±  9%  softirqs.CPU17.RCU
     60790 ±  3%     +17.9%      71684 ±  2%  softirqs.CPU17.TIMER
      6077 ± 12%    +555.2%      39821 ± 10%  softirqs.CPU18.RCU
      6698 ± 67%    +213.1%      20975 ± 45%  softirqs.CPU18.SCHED
      6290 ± 11%    +549.0%      40822 ±  7%  softirqs.CPU19.RCU
      6019 ± 12%    +545.2%      38837 ±  3%  softirqs.CPU2.RCU
     61312 ±  4%     +14.9%      70440        softirqs.CPU2.TIMER
      6005 ± 13%    +578.5%      40743 ±  6%  softirqs.CPU20.RCU
     60234 ±  3%     +96.5%     118388 ± 70%  softirqs.CPU20.TIMER
      6059 ± 11%    +593.9%      42047 ±  9%  softirqs.CPU21.RCU
     60564 ±  2%     +17.0%      70852 ±  3%  softirqs.CPU21.TIMER
      6240 ± 13%    +566.4%      41583 ±  9%  softirqs.CPU22.RCU
      5896 ± 14%    +615.6%      42195 ±  6%  softirqs.CPU23.RCU
     60447 ±  3%     +16.5%      70393 ±  2%  softirqs.CPU23.TIMER
      9971 ± 63%    +351.4%      45007 ± 10%  softirqs.CPU24.RCU
      5138 ± 12%    +731.6%      42732 ±  9%  softirqs.CPU25.RCU
      5442 ± 10%    +703.9%      43752 ±  8%  softirqs.CPU26.RCU
      5348 ±  9%    +730.8%      44428 ±  8%  softirqs.CPU27.RCU
      6779 ± 30%    +540.5%      43420 ± 11%  softirqs.CPU28.RCU
     27123 ± 10%     -67.7%       8768 ±110%  softirqs.CPU28.SCHED
      5194 ± 11%    +753.6%      44338 ± 11%  softirqs.CPU29.RCU
      7499 ± 27%    +436.7%      40253 ± 13%  softirqs.CPU3.RCU
      5292 ± 11%    +742.5%      44593 ±  9%  softirqs.CPU30.RCU
      6377 ± 12%    +546.6%      41235 ±  8%  softirqs.CPU31.RCU
      5440 ±  7%    +730.2%      45165 ±  6%  softirqs.CPU32.RCU
      5586 ±  9%    +705.6%      45007 ±  6%  softirqs.CPU33.RCU
      5600 ± 13%    +667.7%      42994 ±  6%  softirqs.CPU34.RCU
      7064 ± 33%    +530.1%      44511 ±  5%  softirqs.CPU35.RCU
      5530 ± 10%    +686.2%      43480 ± 10%  softirqs.CPU36.RCU
      5552 ± 11%    +704.7%      44677 ±  7%  softirqs.CPU37.RCU
      5414 ±  7%    +751.4%      46092 ±  7%  softirqs.CPU38.RCU
      5567 ±  7%    +722.9%      45811 ±  6%  softirqs.CPU39.RCU
      6871 ± 45%    +462.0%      38614 ±  4%  softirqs.CPU4.RCU
      5416 ± 74%    +332.4%      23421 ± 23%  softirqs.CPU4.SCHED
     62744 ±  7%     +17.9%      73982 ±  5%  softirqs.CPU4.TIMER
      5427 ±  6%    +715.1%      44233 ±  7%  softirqs.CPU40.RCU
      7295 ± 43%    +463.8%      41129 ± 13%  softirqs.CPU41.RCU
      5403 ±  6%    +716.6%      44118 ± 10%  softirqs.CPU42.RCU
      5911 ±  9%    +684.8%      46392 ±  4%  softirqs.CPU43.RCU
     24159 ±  5%     -85.3%       3544 ± 14%  softirqs.CPU43.SCHED
      5257 ± 16%    +766.3%      45546 ±  7%  softirqs.CPU44.RCU
      5376 ± 16%    +701.8%      43108 ±  6%  softirqs.CPU45.RCU
      5336 ±  9%    +751.0%      45416 ±  7%  softirqs.CPU46.RCU
      8221 ± 51%    +457.2%      45814 ±  4%  softirqs.CPU47.RCU
     13323 ±  4%     -72.5%       3661 ± 22%  softirqs.CPU47.SCHED
      5829 ±  9%    +687.8%      45927 ±  8%  softirqs.CPU48.RCU
      6052 ± 14%    +626.2%      43951 ±  7%  softirqs.CPU49.RCU
      6598 ± 24%    +493.0%      39123 ±  5%  softirqs.CPU5.RCU
      4081 ± 30%    +447.6%      22352 ± 24%  softirqs.CPU5.SCHED
     61341 ±  5%     +15.9%      71108 ±  2%  softirqs.CPU5.TIMER
      5291 ±  8%    +646.0%      39468 ± 13%  softirqs.CPU50.RCU
      5363 ±  8%    +629.5%      39123 ± 17%  softirqs.CPU51.RCU
      5272 ± 10%    +710.9%      42755 ± 10%  softirqs.CPU52.RCU
     23573 ± 22%     -77.5%       5302 ± 87%  softirqs.CPU52.SCHED
      5624 ± 10%    +680.7%      43913 ±  6%  softirqs.CPU53.RCU
     25173 ±  5%     -70.6%       7396 ± 75%  softirqs.CPU53.SCHED
      5345 ±  6%    +724.1%      44048 ±  7%  softirqs.CPU54.RCU
      9771 ± 79%    +341.5%      43141 ±  6%  softirqs.CPU55.RCU
      5304 ±  9%    +739.4%      44528 ±  6%  softirqs.CPU56.RCU
      5271 ±  8%    +732.0%      43857 ±  8%  softirqs.CPU57.RCU
      5215 ±  8%    +616.2%      37354 ±  5%  softirqs.CPU58.RCU
      5475 ±  9%    +681.8%      42805 ± 12%  softirqs.CPU59.RCU
     26325 ±  6%     -66.8%       8740 ±121%  softirqs.CPU59.SCHED
      7352 ± 28%    +453.1%      40667 ±  8%  softirqs.CPU6.RCU
      5318 ± 10%    +715.8%      43387 ± 11%  softirqs.CPU60.RCU
      5192 ±  8%    +754.0%      44342 ±  7%  softirqs.CPU61.RCU
      5149 ±  8%    +741.4%      43324 ±  8%  softirqs.CPU62.RCU
     25034 ±  5%     -59.2%      10218 ± 75%  softirqs.CPU62.SCHED
      5285 ±  7%    +709.7%      42795 ±  8%  softirqs.CPU63.RCU
     25158 ±  5%     -74.0%       6533 ±104%  softirqs.CPU63.SCHED
      5192 ±  9%    +711.8%      42153 ±  7%  softirqs.CPU64.RCU
      5317 ±  9%    +694.5%      42243 ± 10%  softirqs.CPU65.RCU
      5228 ±  5%    +676.3%      40587 ± 12%  softirqs.CPU66.RCU
      5509 ± 14%    +626.5%      40029 ± 16%  softirqs.CPU67.RCU
      5085 ±  9%    +696.4%      40498 ±  8%  softirqs.CPU68.RCU
      5197 ±  6%    +691.5%      41135 ± 10%  softirqs.CPU69.RCU
      5622 ± 22%    +581.4%      38311 ±  5%  softirqs.CPU7.RCU
      5316 ± 14%    +695.4%      42288 ± 10%  softirqs.CPU70.RCU
      5128 ± 12%    +693.6%      40701 ± 10%  softirqs.CPU71.RCU
      5517 ± 12%    +598.3%      38524 ±  8%  softirqs.CPU72.RCU
      5538 ± 10%    +582.5%      37798 ±  6%  softirqs.CPU73.RCU
      2631 ±  3%    +639.9%      19471 ± 51%  softirqs.CPU73.SCHED
     59985           +15.4%      69204 ±  4%  softirqs.CPU73.TIMER
      5784 ±  8%    +558.4%      38085 ±  9%  softirqs.CPU74.RCU
     60425           +12.6%      68012 ±  4%  softirqs.CPU74.TIMER
      6162 ± 15%    +522.8%      38377 ±  8%  softirqs.CPU75.RCU
      5729 ±  6%    +546.3%      37029 ±  4%  softirqs.CPU76.RCU
     60517           +18.3%      71613 ±  3%  softirqs.CPU76.TIMER
      5533 ±  8%    +620.4%      39863 ± 12%  softirqs.CPU77.RCU
      3586 ± 47%    +411.7%      18351 ± 50%  softirqs.CPU77.SCHED
     60233           +93.0%     116280 ± 68%  softirqs.CPU77.TIMER
      5551 ±  9%    +582.3%      37876 ±  9%  softirqs.CPU78.RCU
      2653 ±  3%    +685.7%      20847 ± 50%  softirqs.CPU78.SCHED
     60012           +13.2%      67941 ±  4%  softirqs.CPU78.TIMER
      5539 ± 10%    +600.9%      38824 ±  9%  softirqs.CPU79.RCU
     61107 ±  2%     +13.7%      69503 ±  5%  softirqs.CPU79.TIMER
      5633 ± 11%    +602.0%      39549 ±  7%  softirqs.CPU8.RCU
     60252 ±  4%     +16.6%      70258 ±  3%  softirqs.CPU8.TIMER
      6084 ± 20%    +606.2%      42969 ± 12%  softirqs.CPU80.RCU
     61607 ±  4%     +26.6%      77989 ± 20%  softirqs.CPU80.TIMER
      5514 ±  7%    +615.1%      39432 ± 12%  softirqs.CPU81.RCU
      2536 ±  6%    +718.4%      20756 ± 50%  softirqs.CPU81.SCHED
     60328           +13.4%      68441 ±  5%  softirqs.CPU81.TIMER
      5470 ± 11%    +620.1%      39394 ± 15%  softirqs.CPU82.RCU
      2644 ±  4%    +678.2%      20581 ± 49%  softirqs.CPU82.SCHED
     60230           +15.3%      69440 ±  4%  softirqs.CPU82.TIMER
      5408 ± 10%    +634.2%      39712 ± 13%  softirqs.CPU83.RCU
     59961           +14.5%      68686 ±  5%  softirqs.CPU83.TIMER
      5432 ± 13%    +620.4%      39134 ± 16%  softirqs.CPU84.RCU
      2839 ±  7%    +616.3%      20340 ± 48%  softirqs.CPU84.SCHED
     60058           +13.0%      67875 ±  4%  softirqs.CPU84.TIMER
      5298 ±  9%    +662.6%      40403 ± 11%  softirqs.CPU85.RCU
      2694 ±  8%    +562.7%      17858 ± 56%  softirqs.CPU85.SCHED
     59848           +12.9%      67539 ±  4%  softirqs.CPU85.TIMER
      5449 ± 10%    +631.9%      39884 ± 12%  softirqs.CPU86.RCU
     60274           +12.7%      67917 ±  4%  softirqs.CPU86.TIMER
      5342 ± 10%    +651.9%      40170 ± 13%  softirqs.CPU87.RCU
     60073           +13.6%      68250 ±  4%  softirqs.CPU87.TIMER
      5243 ±  9%    +642.4%      38925 ± 15%  softirqs.CPU88.RCU
      2546 ±  5%    +721.4%      20918 ± 50%  softirqs.CPU88.SCHED
     59504           +31.1%      77983 ± 21%  softirqs.CPU88.TIMER
      5292 ±  9%    +648.8%      39628 ±  6%  softirqs.CPU89.RCU
     59780           +14.7%      68553 ±  4%  softirqs.CPU89.TIMER
      5723 ± 13%    +580.2%      38931 ±  9%  softirqs.CPU9.RCU
     61634 ±  4%     +14.9%      70841 ±  4%  softirqs.CPU9.TIMER
      5209 ±  9%    +617.9%      37399 ±  8%  softirqs.CPU90.RCU
     59796           +16.7%      69760 ±  3%  softirqs.CPU90.TIMER
      5239 ± 10%    +594.0%      36362 ±  2%  softirqs.CPU91.RCU
      2612 ±  3%    +922.6%      26715 ±  2%  softirqs.CPU91.SCHED
     59742           +16.1%      69336 ±  3%  softirqs.CPU91.TIMER
      5333 ±  9%    +653.7%      40194 ± 12%  softirqs.CPU92.RCU
      2676 ±  7%    +673.3%      20694 ± 48%  softirqs.CPU92.SCHED
     59975           +14.9%      68938 ±  5%  softirqs.CPU92.TIMER
      5554 ± 14%    +612.6%      39579 ± 14%  softirqs.CPU93.RCU
      3692 ± 36%    +473.4%      21173 ± 48%  softirqs.CPU93.SCHED
     61510 ±  2%     +15.1%      70774 ±  6%  softirqs.CPU93.TIMER
      5531 ±  6%    +625.3%      40120 ± 11%  softirqs.CPU94.RCU
     60110           +15.6%      69461 ±  5%  softirqs.CPU94.TIMER
      5483 ± 15%    +585.9%      37614 ±  3%  softirqs.CPU95.RCU
     13541 ±  6%     +86.9%      25306        softirqs.CPU95.SCHED
    567160 ± 11%    +599.5%    3967127        softirqs.RCU
     63.54 ±  9%      -62.4       1.13 ± 12%  perf-profile.calltrace.cycles-pp.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
     63.56 ±  9%      -62.4       1.16 ± 12%  perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.do_page_fault
     63.51 ±  9%      -62.4       1.12 ± 12%  perf-profile.calltrace.cycles-pp.ext4_dax_huge_fault.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault
     63.70 ±  9%      -60.0       3.74 ± 16%  perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.do_page_fault.page_fault
     64.17 ±  9%      -60.0       4.21 ± 16%  perf-profile.calltrace.cycles-pp.page_fault
     63.82 ±  9%      -60.0       3.86 ± 16%  perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.do_page_fault.page_fault
     64.12 ±  9%      -59.9       4.19 ± 16%  perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
     64.05 ±  9%      -59.9       4.15 ± 16%  perf-profile.calltrace.cycles-pp.do_user_addr_fault.do_page_fault.page_fault
     49.61 ±  9%      -48.8       0.79 ± 11%  perf-profile.calltrace.cycles-pp.dax_iomap_pte_fault.ext4_dax_huge_fault.__do_fault.do_fault.__handle_mm_fault
     42.83 ± 13%      -42.6       0.18 ±173%  perf-profile.calltrace.cycles-pp.__vm_insert_mixed.dax_iomap_pte_fault.ext4_dax_huge_fault.__do_fault.do_fault
     42.25 ± 13%      -42.1       0.17 ±173%  perf-profile.calltrace.cycles-pp.track_pfn_insert.__vm_insert_mixed.dax_iomap_pte_fault.ext4_dax_huge_fault.__do_fault
     42.24 ± 13%      -42.1       0.17 ±173%  perf-profile.calltrace.cycles-pp.lookup_memtype.track_pfn_insert.__vm_insert_mixed.dax_iomap_pte_fault.ext4_dax_huge_fault
     40.35 ± 14%      -40.3       0.00        perf-profile.calltrace.cycles-pp._raw_spin_lock.lookup_memtype.track_pfn_insert.__vm_insert_mixed.dax_iomap_pte_fault
     39.75 ± 14%      -39.8       0.00        perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.lookup_memtype.track_pfn_insert.__vm_insert_mixed
      8.92 ± 36%       -8.9       0.00        perf-profile.calltrace.cycles-pp.jbd2__journal_start.ext4_dax_huge_fault.__do_fault.do_fault.__handle_mm_fault
      8.82 ± 36%       -8.8       0.00        perf-profile.calltrace.cycles-pp.start_this_handle.jbd2__journal_start.ext4_dax_huge_fault.__do_fault.do_fault
      6.04 ± 38%       -6.0       0.00        perf-profile.calltrace.cycles-pp.ext4_iomap_begin.dax_iomap_pte_fault.ext4_dax_huge_fault.__do_fault.do_fault
      0.29 ±100%       +0.6       0.85 ± 18%  perf-profile.calltrace.cycles-pp.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.ext4_dax_huge_fault
      0.00             +0.8       0.75 ± 31%  perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
      0.00             +0.8       0.77 ± 20%  perf-profile.calltrace.cycles-pp.io_u_mark_complete
      0.00             +0.8       0.82 ± 29%  perf-profile.calltrace.cycles-pp.apic_timer_interrupt
      0.00             +1.0       1.00 ± 18%  perf-profile.calltrace.cycles-pp.__mark_inode_dirty.generic_update_time.file_update_time.ext4_dax_huge_fault.__handle_mm_fault
      0.00             +1.0       1.01 ± 17%  perf-profile.calltrace.cycles-pp.generic_update_time.file_update_time.ext4_dax_huge_fault.__handle_mm_fault.handle_mm_fault
      0.00             +1.0       1.04 ± 26%  perf-profile.calltrace.cycles-pp.get_io_u
      0.00             +1.1       1.06 ± 20%  perf-profile.calltrace.cycles-pp.file_update_time.ext4_dax_huge_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      0.00             +1.2       1.15 ± 19%  perf-profile.calltrace.cycles-pp.dax_iomap_pmd_fault.ext4_dax_huge_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      0.00             +1.8       1.85 ± 33%  perf-profile.calltrace.cycles-pp.ramp_time_over
      0.00             +2.3       2.29 ± 30%  perf-profile.calltrace.cycles-pp.io_u_mark_submit
      0.00             +2.5       2.47 ± 19%  perf-profile.calltrace.cycles-pp.ext4_dax_huge_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.do_page_fault
      0.00             +4.3       4.31 ± 25%  perf-profile.calltrace.cycles-pp.fio_gettime
      0.00             +4.4       4.36 ± 21%  perf-profile.calltrace.cycles-pp.td_io_queue
      0.00             +5.0       5.05 ± 18%  perf-profile.calltrace.cycles-pp.io_queue_event
      0.00            +14.2      14.25 ± 19%  perf-profile.calltrace.cycles-pp.io_u_sync_complete
     33.52 ± 18%      +23.7      57.19 ± 15%  perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
     33.51 ± 18%      +23.7      57.23 ± 14%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
     33.52 ± 18%      +23.7      57.25 ± 14%  perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
     33.60 ± 18%      +24.0      57.59 ± 14%  perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
     33.60 ± 18%      +24.0      57.59 ± 14%  perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
     33.60 ± 18%      +24.0      57.59 ± 14%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
     33.79 ± 18%      +24.5      58.28 ± 14%  perf-profile.calltrace.cycles-pp.secondary_startup_64
     63.54 ±  9%      -62.4       1.14 ± 12%  perf-profile.children.cycles-pp.__do_fault
     63.56 ±  9%      -62.4       1.18 ± 12%  perf-profile.children.cycles-pp.do_fault
     63.54 ±  9%      -59.9       3.59 ± 15%  perf-profile.children.cycles-pp.ext4_dax_huge_fault
     63.83 ±  9%      -59.9       3.91 ± 16%  perf-profile.children.cycles-pp.handle_mm_fault
     63.70 ±  9%      -59.9       3.78 ± 15%  perf-profile.children.cycles-pp.__handle_mm_fault
     64.18 ±  9%      -59.9       4.26 ± 16%  perf-profile.children.cycles-pp.page_fault
     64.12 ±  9%      -59.9       4.24 ± 16%  perf-profile.children.cycles-pp.do_page_fault
     64.06 ±  9%      -59.9       4.19 ± 16%  perf-profile.children.cycles-pp.do_user_addr_fault
     49.61 ±  9%      -48.8       0.79 ± 11%  perf-profile.children.cycles-pp.dax_iomap_pte_fault
     42.83 ± 13%      -42.3       0.53 ± 21%  perf-profile.children.cycles-pp.__vm_insert_mixed
     42.24 ± 13%      -41.2       0.99 ±  8%  perf-profile.children.cycles-pp.lookup_memtype
     42.25 ± 13%      -41.2       1.01 ±  8%  perf-profile.children.cycles-pp.track_pfn_insert
     40.88 ± 14%      -40.5       0.33 ± 24%  perf-profile.children.cycles-pp._raw_spin_lock
     39.78 ± 14%      -39.7       0.07 ± 61%  perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
      9.28 ± 35%       -8.9       0.43 ± 26%  perf-profile.children.cycles-pp.jbd2__journal_start
      9.17 ± 35%       -8.8       0.33 ± 29%  perf-profile.children.cycles-pp.start_this_handle
      6.05 ± 38%       -5.7       0.38 ± 18%  perf-profile.children.cycles-pp.ext4_iomap_begin
      4.41 ± 34%       -4.2       0.25 ± 25%  perf-profile.children.cycles-pp.__ext4_journal_stop
      4.22 ± 35%       -4.0       0.23 ± 22%  perf-profile.children.cycles-pp.jbd2_journal_stop
      3.59 ± 30%       -3.2       0.37 ±  5%  perf-profile.children.cycles-pp._raw_read_lock
      2.47 ± 35%       -2.3       0.15 ± 24%  perf-profile.children.cycles-pp.__ext4_journal_start_sb
      2.42 ± 35%       -2.3       0.14 ± 24%  perf-profile.children.cycles-pp.ext4_journal_check_start
      2.24 ± 35%       -2.2       0.03 ±100%  perf-profile.children.cycles-pp.add_transaction_credits
      1.65 ±  8%       -1.1       0.57 ±  6%  perf-profile.children.cycles-pp.find_next_iomem_res
      1.66 ±  8%       -1.0       0.61 ±  7%  perf-profile.children.cycles-pp.walk_system_ram_range
      1.67 ±  8%       -1.0       0.62 ±  7%  perf-profile.children.cycles-pp.pat_pagerange_is_ram
      0.28 ± 10%       -0.1       0.14 ± 15%  perf-profile.children.cycles-pp.sync_regs
      0.15 ± 11%       -0.1       0.03 ±105%  perf-profile.children.cycles-pp.dax_insert_entry
      0.21 ± 15%       -0.1       0.11 ± 13%  perf-profile.children.cycles-pp.rbt_memtype_lookup
      0.11 ± 13%       -0.1       0.04 ±100%  perf-profile.children.cycles-pp.xas_store
      0.04 ± 57%       +0.0       0.07 ± 17%  perf-profile.children.cycles-pp.__sb_start_write
      0.08 ± 14%       +0.0       0.12 ± 15%  perf-profile.children.cycles-pp.get_unlocked_entry
      0.07             +0.0       0.12 ± 18%  perf-profile.children.cycles-pp.___perf_sw_event
      0.00             +0.1       0.05        perf-profile.children.cycles-pp.ksys_read
      0.00             +0.1       0.05        perf-profile.children.cycles-pp.vfs_read
      0.06 ± 11%       +0.1       0.12 ± 12%  perf-profile.children.cycles-pp.xas_find_conflict
      0.00             +0.1       0.06 ± 13%  perf-profile.children.cycles-pp.__x64_sys_execve
      0.00             +0.1       0.06 ± 13%  perf-profile.children.cycles-pp.__do_execve_file
      0.00             +0.1       0.07 ±  7%  perf-profile.children.cycles-pp.ext4_inode_csum
      0.00             +0.1       0.07 ± 13%  perf-profile.children.cycles-pp.execve
      0.00             +0.1       0.07 ± 12%  perf-profile.children.cycles-pp.native_write_msr
      0.00             +0.1       0.07 ± 19%  perf-profile.children.cycles-pp.vmacache_find
      0.00             +0.1       0.07 ± 16%  perf-profile.children.cycles-pp.schedule
      0.00             +0.1       0.08 ± 30%  perf-profile.children.cycles-pp.in_ramp_time
      0.01 ±173%       +0.1       0.09 ± 64%  perf-profile.children.cycles-pp.tick_nohz_next_event
      0.00             +0.1       0.08 ±  5%  perf-profile.children.cycles-pp.ext4_inode_csum_set
      0.00             +0.1       0.08 ± 19%  perf-profile.children.cycles-pp.find_vma
      0.00             +0.1       0.08 ± 23%  perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
      0.00             +0.1       0.08 ± 10%  perf-profile.children.cycles-pp.__sched_text_start
      0.03 ±100%       +0.1       0.11 ± 68%  perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
      0.00             +0.1       0.08 ± 28%  perf-profile.children.cycles-pp.ret_from_fork
      0.00             +0.1       0.08 ± 28%  perf-profile.children.cycles-pp.kthread
      0.00             +0.1       0.09 ± 13%  perf-profile.children.cycles-pp.poll
      0.00             +0.1       0.09 ± 13%  perf-profile.children.cycles-pp.__x64_sys_poll
      0.00             +0.1       0.09 ± 13%  perf-profile.children.cycles-pp.do_sys_poll
      0.02 ±173%       +0.1       0.12 ± 12%  perf-profile.children.cycles-pp.ext4_data_block_valid
      0.00             +0.1       0.10 ± 33%  perf-profile.children.cycles-pp.rcu_core
      0.00             +0.1       0.11 ± 35%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
      0.00             +0.1       0.11 ± 38%  perf-profile.children.cycles-pp.rcu_sched_clock_irq
      0.00             +0.1       0.11 ± 14%  perf-profile.children.cycles-pp.copyin
      0.00             +0.1       0.12 ± 13%  perf-profile.children.cycles-pp.ext4_data_block_valid_rcu
      0.00             +0.1       0.12 ± 13%  perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
      0.00             +0.1       0.12 ± 16%  perf-profile.children.cycles-pp.iov_iter_copy_from_user_atomic
      0.00             +0.1       0.12 ± 22%
perf-profile.children.cycles-pp.put_file 0.00 +0.1 0.13 ± 38% perf-profile.children.cycles-pp.update_load_avg 0.00 +0.1 0.14 ± 29% perf-profile.children.cycles-pp.td_io_prep 0.00 +0.1 0.14 ± 22% perf-profile.children.cycles-pp.ntime_since 0.08 ± 17% +0.2 0.25 ± 39% perf-profile.children.cycles-pp.unlock_file 0.00 +0.2 0.17 ± 14% perf-profile.children.cycles-pp.__ext4_get_inode_loc 0.04 ±100% +0.2 0.23 ± 62% perf-profile.children.cycles-pp.menu_select 0.06 ± 7% +0.2 0.26 ± 14% perf-profile.children.cycles-pp.__generic_file_write_iter 0.06 ± 7% +0.2 0.26 ± 14% perf-profile.children.cycles-pp.generic_perform_write 0.06 ± 7% +0.2 0.26 ± 13% perf-profile.children.cycles-pp.generic_file_write_iter 0.00 +0.2 0.21 ± 26% perf-profile.children.cycles-pp.__get_io_u 0.11 ± 17% +0.2 0.32 ± 18% perf-profile.children.cycles-pp.ext4_do_update_inode 0.11 ± 17% +0.2 0.33 ± 20% perf-profile.children.cycles-pp.ext4_mark_iloc_dirty 0.06 ± 7% +0.2 0.27 ± 14% perf-profile.children.cycles-pp.new_sync_write 0.06 ± 6% +0.2 0.28 ± 12% perf-profile.children.cycles-pp.perf_mmap__push 0.06 ± 11% +0.2 0.28 ± 13% perf-profile.children.cycles-pp.__GI___libc_write 0.06 ± 7% +0.2 0.28 ± 14% perf-profile.children.cycles-pp.vfs_write 0.06 ± 11% +0.2 0.29 ± 13% perf-profile.children.cycles-pp.ksys_write 0.00 +0.2 0.23 ± 17% perf-profile.children.cycles-pp.ext4_reserve_inode_write 0.08 ± 5% +0.3 0.37 ± 13% perf-profile.children.cycles-pp.cmd_record 0.08 +0.3 0.38 ± 10% perf-profile.children.cycles-pp.__libc_start_main 0.08 +0.3 0.38 ± 10% perf-profile.children.cycles-pp.main 0.03 ±102% +0.3 0.34 ± 65% perf-profile.children.cycles-pp.__softirqentry_text_start 0.68 ± 16% +0.3 1.02 ± 18% perf-profile.children.cycles-pp.__mark_inode_dirty 0.51 ± 17% +0.3 0.86 ± 18% perf-profile.children.cycles-pp.ext4_dirty_inode 0.07 ± 12% +0.3 0.42 ± 39% perf-profile.children.cycles-pp.task_tick_fair 0.05 ±100% +0.4 0.40 ± 63% perf-profile.children.cycles-pp.irq_exit 0.71 ± 16% +0.4 1.09 ± 20% 
perf-profile.children.cycles-pp.file_update_time 0.64 ± 16% +0.4 1.02 ± 18% perf-profile.children.cycles-pp.generic_update_time 0.09 ± 17% +0.4 0.51 ± 38% perf-profile.children.cycles-pp.scheduler_tick 0.01 ±173% +0.4 0.45 ± 29% perf-profile.children.cycles-pp.add_clat_sample 0.14 ± 18% +0.4 0.58 ± 18% perf-profile.children.cycles-pp.ext4_mark_inode_dirty 0.06 ± 11% +0.5 0.51 ± 28% perf-profile.children.cycles-pp.utime_since_now 0.07 ± 13% +0.5 0.53 ± 28% perf-profile.children.cycles-pp.add_lat_sample 0.00 +0.5 0.53 ± 20% perf-profile.children.cycles-pp.vmf_insert_pfn_pmd 0.14 ± 21% +0.6 0.77 ± 39% perf-profile.children.cycles-pp.update_process_times 0.14 ± 21% +0.7 0.79 ± 39% perf-profile.children.cycles-pp.tick_sched_handle 0.11 ± 4% +0.7 0.76 ± 7% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe 0.10 ± 4% +0.7 0.76 ± 7% perf-profile.children.cycles-pp.do_syscall_64 0.17 ± 25% +0.7 0.83 ± 38% perf-profile.children.cycles-pp.tick_sched_timer 0.00 +0.7 0.68 ± 22% perf-profile.children.cycles-pp.io_u_mark_depth 0.00 +0.8 0.78 ± 20% perf-profile.children.cycles-pp.io_u_mark_complete 0.21 ± 21% +0.8 0.99 ± 35% perf-profile.children.cycles-pp.__hrtimer_run_queues 0.38 ± 34% +0.9 1.26 ± 33% perf-profile.children.cycles-pp.hrtimer_interrupt 0.17 ± 17% +0.9 1.09 ± 26% perf-profile.children.cycles-pp.get_io_u 0.00 +1.2 1.15 ± 19% perf-profile.children.cycles-pp.dax_iomap_pmd_fault 0.46 ± 40% +1.2 1.70 ± 40% perf-profile.children.cycles-pp.smp_apic_timer_interrupt 0.51 ± 38% +1.3 1.82 ± 38% perf-profile.children.cycles-pp.apic_timer_interrupt 0.00 +1.6 1.56 ± 21% perf-profile.children.cycles-pp.ramp_time_over 0.07 ± 22% +1.9 1.98 ± 20% perf-profile.children.cycles-pp.io_u_mark_submit 0.18 ± 11% +4.2 4.35 ± 25% perf-profile.children.cycles-pp.fio_gettime 0.16 ± 11% +5.2 5.40 ± 21% perf-profile.children.cycles-pp.td_io_queue 0.00 +7.2 7.17 ± 19% perf-profile.children.cycles-pp.io_queue_event 0.09 ± 9% +13.2 13.24 ± 19% 
perf-profile.children.cycles-pp.io_u_sync_complete 33.52 ± 18% +23.7 57.20 ± 15% perf-profile.children.cycles-pp.intel_idle 33.60 ± 18% +24.0 57.59 ± 14% perf-profile.children.cycles-pp.start_secondary 33.70 ± 18% +24.2 57.93 ± 14% perf-profile.children.cycles-pp.cpuidle_enter_state 33.70 ± 18% +24.2 57.94 ± 14% perf-profile.children.cycles-pp.cpuidle_enter 33.79 ± 18% +24.5 58.28 ± 14% perf-profile.children.cycles-pp.secondary_startup_64 33.79 ± 18% +24.5 58.28 ± 14% perf-profile.children.cycles-pp.cpu_startup_entry 33.79 ± 18% +24.5 58.29 ± 14% perf-profile.children.cycles-pp.do_idle 39.56 ± 14% -39.5 0.07 ± 61% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath 4.93 ± 37% -4.8 0.12 ± 31% perf-profile.self.cycles-pp.start_this_handle 4.18 ± 35% -4.0 0.15 ± 27% perf-profile.self.cycles-pp.jbd2_journal_stop 3.56 ± 30% -3.2 0.37 ± 5% perf-profile.self.cycles-pp._raw_read_lock 2.36 ± 35% -2.2 0.12 ± 20% perf-profile.self.cycles-pp.ext4_journal_check_start 2.23 ± 35% -2.2 0.03 ±100% perf-profile.self.cycles-pp.add_transaction_credits 1.10 ± 8% -0.8 0.26 ± 19% perf-profile.self.cycles-pp._raw_spin_lock 1.18 ± 8% -0.8 0.40 ± 20% perf-profile.self.cycles-pp.find_next_iomem_res 0.27 ± 10% -0.1 0.14 ± 15% perf-profile.self.cycles-pp.sync_regs 0.21 ± 14% -0.1 0.11 ± 13% perf-profile.self.cycles-pp.rbt_memtype_lookup 0.06 ± 9% +0.0 0.09 ± 17% perf-profile.self.cycles-pp.___perf_sw_event 0.04 ± 58% +0.1 0.10 ± 11% perf-profile.self.cycles-pp.xas_find_conflict 0.00 +0.1 0.07 ± 12% perf-profile.self.cycles-pp.native_write_msr 0.00 +0.1 0.07 ± 12% perf-profile.self.cycles-pp.__sb_start_write 0.00 +0.1 0.07 ± 19% perf-profile.self.cycles-pp.vmacache_find 0.00 +0.1 0.07 ± 19% perf-profile.self.cycles-pp.kmem_cache_alloc 0.00 +0.1 0.07 ± 41% perf-profile.self.cycles-pp.task_tick_fair 0.00 +0.1 0.08 ± 29% perf-profile.self.cycles-pp._raw_spin_lock_irqsave 0.00 +0.1 0.08 ± 30% perf-profile.self.cycles-pp.in_ramp_time 0.00 +0.1 0.11 ± 32% 
perf-profile.self.cycles-pp.td_io_prep 0.00 +0.1 0.11 ± 15% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string 0.00 +0.1 0.11 ± 26% perf-profile.self.cycles-pp.put_file 0.00 +0.1 0.12 ± 13% perf-profile.self.cycles-pp.ext4_data_block_valid_rcu 0.00 +0.1 0.12 ± 30% perf-profile.self.cycles-pp.ext4_do_update_inode 0.00 +0.1 0.14 ± 22% perf-profile.self.cycles-pp.ntime_since 0.08 ± 17% +0.1 0.23 ± 42% perf-profile.self.cycles-pp.unlock_file 0.00 +0.2 0.20 ± 27% perf-profile.self.cycles-pp.__get_io_u 0.00 +0.2 0.23 ± 29% perf-profile.self.cycles-pp.io_u_mark_depth 0.06 ± 9% +0.4 0.50 ± 29% perf-profile.self.cycles-pp.utime_since_now 0.00 +0.4 0.45 ± 29% perf-profile.self.cycles-pp.add_clat_sample 0.07 ± 13% +0.5 0.52 ± 29% perf-profile.self.cycles-pp.add_lat_sample 0.00 +0.8 0.76 ± 20% perf-profile.self.cycles-pp.io_u_mark_complete 0.17 ± 17% +0.9 1.06 ± 26% perf-profile.self.cycles-pp.get_io_u 0.00 +1.1 1.05 ± 22% perf-profile.self.cycles-pp.ramp_time_over 0.03 ±100% +1.4 1.43 ± 20% perf-profile.self.cycles-pp.io_u_mark_submit 0.18 ± 10% +4.1 4.27 ± 24% perf-profile.self.cycles-pp.fio_gettime 0.16 ± 10% +5.1 5.30 ± 21% perf-profile.self.cycles-pp.td_io_queue 0.00 +7.1 7.10 ± 19% perf-profile.self.cycles-pp.io_queue_event 0.08 ± 12% +10.9 10.95 ± 19% perf-profile.self.cycles-pp.io_u_sync_complete 33.52 ± 18% +23.7 57.19 ± 15% perf-profile.self.cycles-pp.intel_idle fio.write_bw_MBps 55000 +-------------------------------------------------------------------+ 50000 |-+ O O O O O | | O O O O O O | 45000 |-+O O O O O O O | 40000 |-+ | | O O | 35000 |-+ | 30000 |-+ | 25000 |-+ | | | 20000 |-+ | 15000 |-+ | | | 10000 |..+..+..+..+..+...+..+..+..+..+..+..+..+..+..+..+...+..+..+..+..+..| 5000 +-------------------------------------------------------------------+ fio.write_iops 1.4e+07 +-----------------------------------------------------------------+ | O O O O O | 1.2e+07 |-+ O O O O O | | O O O O O O O O | | | 1e+07 |-+ | | O O | 8e+06 |-+ | | | 6e+06 |-+ | | | | | 
4e+06 |-+ | | | 2e+06 +-----------------------------------------------------------------+ fio.write_clat_mean_us 22000 +-------------------------------------------------------------------+ 20000 |..+..+..+..+..+...+..+..+..+..+..+..+..+..+..+..+...+..+..+..+..+..| | | 18000 |-+ | 16000 |-+ | | | 14000 |-+ | 12000 |-+ | 10000 |-+ | | | 8000 |-+ | 6000 |-+ | | | 4000 |-+O O O O O O O O O O | 2000 +-------------------------------------------------------------------+ fio.write_clat_90__us 30000 +-------------------------------------------------------------------+ | | 25000 |..+..+.. .+.. .+..+..+.. .+..+..+.. .+.. ..+.. .+.. .+..| | +. +...+. +. +. +. +. +. | | | 20000 |-+ | | | 15000 |-+ | | | 10000 |-+ | | | | | 5000 |-+O O O O O O O O O O O O O O O O O O O O | | | 0 +-------------------------------------------------------------------+ fio.write_clat_95__us 30000 +-------------------------------------------------------------------+ |..+..+.. .+.. .+..+..+.. .+..+.. .+.. ..+.. .+.. .+..| 25000 |-+ +. +...+. +..+. +. +. +. +. | | | | | 20000 |-+ | | | 15000 |-+ | | | 10000 |-+ | | | | O O | 5000 |-+O O O O O O O O O O O O O O O O O O | | | 0 +-------------------------------------------------------------------+ fio.write_clat_99__us 40000 +-------------------------------------------------------------------+ | +.. +.. .+.. .+.. | 35000 |.. +.. .. ..+..+. +.. .+. .+..+...+.. .+..+..+..| | + +. +..+. +. +. 
| 30000 |-+ | | | 25000 |-+ | | | 20000 |-+ | | | 15000 |-+ | | O O | 10000 |-+O O O O | | O O O O O O O O O O O O O O | 5000 +-------------------------------------------------------------------+ fio.latency_4us_ 80 +----------------------------------------------------------------------+ | O | 70 |-+O O O O | 60 |-+ O O | | O O O O | 50 |-+ O O O O | | O | 40 |-+ O O | | | 30 |-+ | 20 |-+ | | | 10 |-+ | | | 0 +----------------------------------------------------------------------+ fio.latency_10us_ 25 +----------------------------------------------------------------------+ | | | O O | 20 |-+ | | | | | 15 |-+ | | | 10 |-+ | | | | O O O O O O O | 5 |-+ O O O O O O O | | O O O O | | .+.. .+.. | 0 +----------------------------------------------------------------------+ fio.latency_20us_ 60 +----------------------------------------------------------------------+ | .+.. .+.. .+.. ..+.. .+.. ..+.. .| 50 |..+..+... .+. +...+..+..+..+...+. +. +. +. +. +. | | +. | | | 40 |-+ | | | 30 |-+ | | | 20 |-+ | | | | | 10 |-+ | | | 0 +----------------------------------------------------------------------+ fio.latency_50us_ 60 +----------------------------------------------------------------------+ | | 50 |.. ..+.. | | +..+. +.. .+...+..+..+..+...+.. .+..+..+... .+.. .+... .+..| | +. +. +. +. +. 
| 40 |-+ | | | 30 |-+ | | | 20 |-+ | | | | | 10 |-+ | | | 0 +----------------------------------------------------------------------+ fio.workload 3e+09 +-----------------------------------------------------------------+ | | 2.5e+09 |-+ O O O O O O O | | O O O O O O O O O O | | O | 2e+09 |-+ O O | | | 1.5e+09 |-+ | | | 1e+09 |-+ | | | | | 5e+08 |..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..| | | 0 +-----------------------------------------------------------------+ fio.time.user_time 10000 +-------------------------------------------------------------------+ 9000 |-+O O O O O O O O O O O O O O O O O O O O | | | 8000 |-+ | 7000 |-+ | | | 6000 |-+ | 5000 |-+ | 4000 |-+ | | | 3000 |-+ | 2000 |-+ | | | 1000 |..+..+..+..+..+...+..+..+..+..+..+..+..+..+..+..+...+..+..+..+..+..| 0 +-------------------------------------------------------------------+ fio.time.system_time 10000 +-------------------------------------------------------------------+ 9000 |..+..+..+..+..+...+..+..+..+..+..+..+..+..+..+..+...+..+..+..+..+..| | | 8000 |-+ | 7000 |-+ | | | 6000 |-+ | 5000 |-+ | 4000 |-+ | | | 3000 |-+ | 2000 |-+ | | | 1000 |-+O O O O O O O O O O O O O O O O O O O O | 0 +-------------------------------------------------------------------+ fio.time.minor_page_faults 5e+08 +-----------------------------------------------------------------+ 4.5e+08 |..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..| | | 4e+08 |-+ | 3.5e+08 |-+ | | | 3e+08 |-+ | 2.5e+08 |-+ | 2e+08 |-+ | | | 1.5e+08 |-+ | 1e+08 |-+ | | O O O O O O O O O | 5e+07 |-+O O O O O O O O O O O | 0 +-----------------------------------------------------------------+ fio.time.voluntary_context_switches 24000 +-------------------------------------------------------------------+ |..+..+..+..+..+...+..+..+..+..+..+..+..+.. .+..+... .+..+..+.. .| 23500 |-+ +. +. +. 
| | | 23000 |-+ | | | 22500 |-+ | | | 22000 |-+ | | | 21500 |-+ O O | | O O O O O O O O O O O O O | 21000 |-+ O | | O O O O | 20500 +-------------------------------------------------------------------+ [*] bisect-good sample [O] bisect-bad sample Disclaimer: Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance. Thanks, Rong Chen View attachment "config-5.4.0-rc3-00001-g6370740e5f8ef" of type "text/plain" (197768 bytes) View attachment "job-script" of type "text/plain" (8166 bytes) View attachment "job.yaml" of type "text/plain" (5737 bytes) View attachment "reproduce" of type "text/plain" (937 bytes)