Date: Tue, 12 Jan 2021 22:10:45 +0800
From: kernel test robot <oliver.sang@...el.com>
To: Sami Tolvanen <samitolvanen@...gle.com>
Cc: Kees Cook <keescook@...omium.org>, Bjorn Helgaas <bhelgaas@...gle.com>, LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org, lkp@...el.com, ying.huang@...el.com, feng.tang@...el.com, zhengjun.xing@...el.com
Subject: [PCI] dc83615370: will-it-scale.per_process_ops -1.2% regression

Greetings,

FYI, we noticed a -1.2% regression of will-it-scale.per_process_ops due to commit:

commit: dc83615370e7ebcb181a21a8ad13a77c278ab81c ("PCI: Fix PREL32 relocations for LTO")
https://git.kernel.org/cgit/linux/kernel/git/kees/linux.git for-next/kspp

in testcase: will-it-scale
on test machine: 104 threads Skylake with 192G memory
with the following parameters:

    nr_task: 50%
    mode: process
    test: mmap1
    cpufreq_governor: performance
    ucode: 0x2006a08

test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and a threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale

In addition to that, the commit also has a significant impact on the following tests:

+------------------+-------------------------------------------------------------------+
| testcase: change | reaim: reaim.jobs_per_min -1.3% regression                        |
| test machine     | 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory |
| test parameters  | cpufreq_governor=performance                                      |
|                  | nr_task=100%                                                      |
|                  | runtime=300s                                                      |
|                  | test=new_dbase                                                    |
|                  | ucode=0xde                                                        |
+------------------+-------------------------------------------------------------------+

If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@...el.com>

Details are as below:

To reproduce:

    git clone https://github.com/intel/lkp-tests.git
    cd lkp-tests
    bin/lkp install job.yaml   # job file is attached in this email
    bin/lkp run job.yaml
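For readers unfamiliar with the mmap1 testcase named above: each will-it-scale worker process essentially maps an anonymous region, unmaps it again, and counts completed round trips, which is what per_process_ops measures. The sketch below is a minimal illustration of such a loop, not the actual will-it-scale source; the 128 MiB mapping size and the 5-second measurement window are assumptions made only for illustration.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <time.h>

    #define MAP_SIZE (128UL * 1024 * 1024)  /* assumed size, for illustration only */

    int main(void)
    {
        unsigned long ops = 0;
        time_t end = time(NULL) + 5;        /* assumed ~5 s measurement window */

        while (time(NULL) < end) {
            /* anonymous private mapping, immediately torn down again */
            void *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
            }
            if (munmap(p, MAP_SIZE) != 0) {
                perror("munmap");
                return 1;
            }
            ops++;  /* one "op" = one mmap+munmap round trip */
        }
        printf("ops: %lu\n", ops);
        return 0;
    }

Because every iteration goes through the munmap path, even a small slowdown in unmap_region() and its callees shows up directly in per_process_ops; the perf-profile data below indeed shows the munmap call chain taking a larger share of cycles on the patched kernel.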
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/process/50%/debian-10.4-x86_64-20200603.cgz/lkp-skl-fpga01/mmap1/will-it-scale/0x2006a08

commit:
  a51d9615ff ("init: lto: fix PREL32 relocations")
  dc83615370 ("PCI: Fix PREL32 relocations for LTO")

a51d9615ffb5559c dc83615370e7ebcb181a21a8ad1
---------------- ---------------------------
       %stddev      %change        %stddev
           \            |              \
21214909  -1.2%  20965820  will-it-scale.52.processes
407978  -1.2%  403188  will-it-scale.per_process_ops
21214909  -1.2%  20965820  will-it-scale.workload
220218 ±139%  +4481.9%  10090202 ± 57%  cpuidle.C6.usage
3713 ± 5%  -3.6%  3578 ± 5%  sched_debug.cpu.nr_switches.avg
356911 ± 6%  -9.6%  322794 ± 2%  numa-numastat.node0.local_node
45147 ± 51%  +99.7%  90173 ± 6%  numa-numastat.node0.other_node
48643 ± 47%  -92.7%  3532 ±164%  numa-numastat.node1.other_node
18557  -1.3%  18311  proc-vmstat.nr_kernel_stack
8073 ± 61%  +93.2%  15595 ± 23%  proc-vmstat.numa_hint_faults
958803  +0.9%  967277  proc-vmstat.pgfault
0.03 ± 3%  +16.8%  0.04 ± 8%  perf-sched.wait_and_delay.avg.ms.preempt_schedule_common._cond_resched.unmap_page_range.unmap_vmas.unmap_region
684.53 ± 2%  -15.3%  579.57 ± 10%  perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.13 ± 28%  +218.0%  0.42 ± 68%  perf-sched.wait_and_delay.max.ms.preempt_schedule_common._cond_resched.unmap_page_range.unmap_vmas.unmap_region
0.03 ± 3%  +16.8%  0.04 ± 8%  perf-sched.wait_time.avg.ms.preempt_schedule_common._cond_resched.unmap_page_range.unmap_vmas.unmap_region
684.52 ± 2%  -15.3%  579.56 ± 10%  perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.13 ± 28%  +218.0%  0.42 ± 68%  perf-sched.wait_time.max.ms.preempt_schedule_common._cond_resched.unmap_page_range.unmap_vmas.unmap_region
1310 ± 4%  -9.7%  1183 ± 4%  slabinfo.dmaengine-unmap-16.active_objs
1310 ± 4%  -9.7%  1183 ± 4%  slabinfo.dmaengine-unmap-16.num_objs
35004 ± 2%  -8.2%  32147 ± 6%  slabinfo.vm_area_struct.active_objs
1752 ± 2%  -8.1%  1609 ± 6%  slabinfo.vm_area_struct.active_slabs
35050 ± 2%  -8.1%  32199 ± 6%  slabinfo.vm_area_struct.num_objs
1752 ± 2%  -8.1%  1609 ± 6%  slabinfo.vm_area_struct.num_slabs
2898 ± 14%  -27.6%  2099 ± 6%  numa-meminfo.node0.Active
2858 ± 14%  -33.0%  1915 ± 7%  numa-meminfo.node0.Active(anon)
90099 ± 38%  +66.5%  150033 ± 5%  numa-meminfo.node0.AnonPages
158948 ± 62%  +93.8%  308046 ± 11%  numa-meminfo.node0.AnonPages.max
96670 ± 32%  +60.2%  154892 ± 4%  numa-meminfo.node0.Inactive
96637 ± 32%  +60.1%  154748 ± 4%  numa-meminfo.node0.Inactive(anon)
179842 ± 17%  -33.2%  120168 ± 5%  numa-meminfo.node1.Inactive
179727 ± 17%  -33.1%  120164 ± 5%  numa-meminfo.node1.Inactive(anon)
3840 ± 10%  -21.5%  3015 ± 11%  numa-meminfo.node1.PageTables
714.50 ± 14%  -33.0%  478.75 ± 7%  numa-vmstat.node0.nr_active_anon
22517 ± 38%  +66.5%  37498 ± 5%  numa-vmstat.node0.nr_anon_pages
24152 ± 32%  +60.1%  38677 ± 4%  numa-vmstat.node0.nr_inactive_anon
714.50 ± 14%  -33.0%  478.75 ± 7%  numa-vmstat.node0.nr_zone_active_anon
24152 ± 32%  +60.1%  38677 ± 4%  numa-vmstat.node0.nr_zone_inactive_anon
961010 ± 9%  +10.8%  1064969 ± 5%  numa-vmstat.node0.numa_hit
44968 ± 17%  -33.3%  29999 ± 5%  numa-vmstat.node1.nr_inactive_anon
225.50 ± 11%  -80.2%  44.75 ±173%  numa-vmstat.node1.nr_mlock
960.75 ± 10%  -21.6%  753.50 ± 11%  numa-vmstat.node1.nr_page_table_pages
44968 ± 17%  -33.3%  29999 ± 5%  numa-vmstat.node1.nr_zone_inactive_anon
14012 ± 13%  +30.4%  18268 ± 8%  softirqs.CPU16.RCU
30796 ± 31%  -59.4%  12509 ± 37%  softirqs.CPU16.SCHED
12512 ± 7%  +47.9%  18510 ± 6%  softirqs.CPU2.RCU
32471 ± 18%  -80.2%  6425 ± 20%  softirqs.CPU2.SCHED
22103 ± 21%  +62.8%  35973 ± 9%  softirqs.CPU22.SCHED
25696 ± 55%  -65.4%  8900 ± 58%  softirqs.CPU34.SCHED
15659 ± 6%  +21.5%  19026 ± 6%  softirqs.CPU38.RCU
22098 ± 31%  -61.2%  8571 ± 44%  softirqs.CPU38.SCHED
17434 ± 13%  -30.2%  12160 ± 5%  softirqs.CPU54.RCU
9976 ± 45%  +274.9%  37404 ± 4%  softirqs.CPU54.SCHED
39372 ± 3%  -26.1%  29080 ± 35%  softirqs.CPU56.SCHED
14032 ± 66%  +129.6%  32221 ± 15%  softirqs.CPU68.SCHED
14559 ± 4%  +14.9%  16722 ± 7%  softirqs.CPU74.RCU
23526 ± 14%  -63.9%  8492 ± 35%  softirqs.CPU74.SCHED
14648 ± 6%  +20.8%  17695 ± 12%  softirqs.CPU79.RCU
22648 ± 34%  +57.2%  35591 ± 12%  softirqs.CPU90.SCHED
1.10 ± 11%  +0.2  1.29 ± 9%  perf-profile.calltrace.cycles-pp.perf_event_mmap_output.perf_iterate_sb.perf_event_mmap.mmap_region.do_mmap
2.30 ± 10%  +0.3  2.62 ± 9%  perf-profile.calltrace.cycles-pp.perf_iterate_sb.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
5.25 ± 10%  +1.2  6.48 ± 9%  perf-profile.calltrace.cycles-pp.___might_sleep.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
14.56 ± 10%  +2.2  16.79 ± 9%  perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap
14.94 ± 10%  +2.3  17.21 ± 9%  perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
20.53 ± 10%  +2.9  23.44 ± 9%  perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
24.89 ± 10%  +3.4  28.27 ± 9%  perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
25.84 ± 10%  +3.5  29.32 ± 9%  perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
26.23 ± 10%  +3.5  29.76 ± 9%  perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
26.40 ± 10%  +3.5  29.94 ± 9%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
31.31 ± 10%  +4.1  35.36 ± 9%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
36.81 ± 10%  +4.7  41.52 ± 9%  perf-profile.calltrace.cycles-pp.__munmap
0.22 ± 9%  +0.0  0.27 ± 8%  perf-profile.children.cycles-pp.free_pgtables
1.14 ± 12%  +0.2  1.33 ± 9%  perf-profile.children.cycles-pp.perf_event_mmap_output
2.34 ± 10%  +0.3  2.66 ± 9%  perf-profile.children.cycles-pp.perf_iterate_sb
5.72 ± 10%  +1.3  7.00 ± 9%  perf-profile.children.cycles-pp.___might_sleep
14.58 ± 10%  +2.2  16.80 ± 9%  perf-profile.children.cycles-pp.unmap_page_range
14.96 ± 10%  +2.3  17.23 ± 9%  perf-profile.children.cycles-pp.unmap_vmas
20.57 ± 10%  +2.9  23.48 ± 9%  perf-profile.children.cycles-pp.unmap_region
24.93 ± 10%  +3.4  28.31 ± 9%  perf-profile.children.cycles-pp.__do_munmap
25.85 ± 10%  +3.5  29.33 ± 9%  perf-profile.children.cycles-pp.__vm_munmap
26.25 ± 10%  +3.5  29.78 ± 9%  perf-profile.children.cycles-pp.__x64_sys_munmap
37.22 ± 10%  +4.8  41.98 ± 9%  perf-profile.children.cycles-pp.__munmap
40.09 ± 10%  +5.1  45.21 ± 9%  perf-profile.children.cycles-pp.do_syscall_64
0.09 ± 11%  +0.0  0.12 ± 10%  perf-profile.self.cycles-pp.unlink_anon_vmas
0.46 ± 11%  +0.1  0.53 ± 9%  perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
1.09 ± 11%  +0.2  1.28 ± 9%  perf-profile.self.cycles-pp.perf_event_mmap_output
5.64 ± 10%  +1.3  6.91 ± 9%  perf-profile.self.cycles-pp.___might_sleep
4.912e+10  -1.2%  4.855e+10  perf-stat.i.branch-instructions
2.135e+08  -1.1%  2.111e+08  perf-stat.i.branch-misses
6788478 ± 11%  -10.0%  6109308 ± 2%  perf-stat.i.cache-references
0.71  +1.3%  0.72  perf-stat.i.cpi
42307178  -1.1%  41837456  perf-stat.i.dTLB-load-misses
5.014e+10  -1.1%  4.958e+10  perf-stat.i.dTLB-loads
37290  -1.9%  36573  perf-stat.i.dTLB-store-misses
2.265e+10  -1.2%  2.239e+10  perf-stat.i.dTLB-stores
43306433  +4.1%  45095720  perf-stat.i.iTLB-load-misses
2.034e+11  -1.2%  2.011e+11  perf-stat.i.instructions
4751  -5.5%  4489  perf-stat.i.instructions-per-iTLB-miss
1.41  -1.3%  1.40  perf-stat.i.ipc
1172  -1.1%  1159  perf-stat.i.metric.M/sec
7957 ± 2%  +9.0%  8672 ± 2%  perf-stat.i.node-stores
0.71  +1.3%  0.72  perf-stat.overall.cpi
4698  -5.1%  4460  perf-stat.overall.instructions-per-iTLB-miss
1.41  -1.3%  1.40  perf-stat.overall.ipc
4.895e+10  -1.2%  4.838e+10  perf-stat.ps.branch-instructions
2.128e+08  -1.1%  2.105e+08  perf-stat.ps.branch-misses
6786238 ± 11%  -9.8%  6121425 ± 2%  perf-stat.ps.cache-references
42160053  -1.1%  41690240  perf-stat.ps.dTLB-load-misses
4.996e+10  -1.1%  4.941e+10  perf-stat.ps.dTLB-loads
37208  -1.8%  36524  perf-stat.ps.dTLB-store-misses
2.258e+10  -1.1%  2.232e+10  perf-stat.ps.dTLB-stores
43155095  +4.1%  44932048  perf-stat.ps.iTLB-load-misses
2.027e+11  -1.2%  2.004e+11  perf-stat.ps.instructions
3046  +0.7%  3069  perf-stat.ps.minor-faults
8008 ± 2%  +9.1%  8735 ± 2%  perf-stat.ps.node-stores
3047  +0.7%  3069  perf-stat.ps.page-faults
6.123e+13  -1.1%  6.055e+13  perf-stat.total.instructions
1321 ±144%  +378.3%  6319 ± 86%  interrupts.40:PCI-MSI.67633155-edge.eth0-TxRx-2
833.25 ± 3%  -12.8%  726.25 ± 8%  interrupts.CPU101.CAL:Function_call_interrupts
113.25 ± 80%  -47.7%  59.25 ±120%  interrupts.CPU103.TLB:TLB_shootdowns
768.00 ± 5%  +34.8%  1035 ± 25%  interrupts.CPU15.CAL:Function_call_interrupts
120.00 ± 62%  +96.0%  235.25 ± 15%  interrupts.CPU16.RES:Rescheduling_interrupts
876.75 ± 8%  -14.7%  748.00 ± 5%  interrupts.CPU2.CAL:Function_call_interrupts
79.75 ± 58%  +259.2%  286.50 ± 4%  interrupts.CPU2.RES:Rescheduling_interrupts
6449 ± 30%  -61.2%  2505 ± 24%  interrupts.CPU21.NMI:Non-maskable_interrupts
6449 ± 30%  -61.2%  2505 ± 24%  interrupts.CPU21.PMI:Performance_monitoring_interrupts
793.25 ± 6%  +11.3%  883.25 ± 3%  interrupts.CPU22.CAL:Function_call_interrupts
5472 ± 17%  -41.2%  3217 ± 48%  interrupts.CPU22.NMI:Non-maskable_interrupts
5472 ± 17%  -41.2%  3217 ± 48%  interrupts.CPU22.PMI:Performance_monitoring_interrupts
179.25 ± 12%  -69.5%  54.75 ± 47%  interrupts.CPU22.RES:Rescheduling_interrupts
222.00 ± 23%  -45.3%  121.50 ± 27%  interrupts.CPU27.RES:Rescheduling_interrupts
6433 ± 13%  -41.0%  3794 ± 30%  interrupts.CPU29.NMI:Non-maskable_interrupts
6433 ± 13%  -41.0%  3794 ± 30%  interrupts.CPU29.PMI:Performance_monitoring_interrupts
1321 ±144%  +378.3%  6319 ± 86%  interrupts.CPU32.40:PCI-MSI.67633155-edge.eth0-TxRx-2
59.75 ±123%  -88.3%  7.00 ±124%  interrupts.CPU34.TLB:TLB_shootdowns
299.00 ± 11%  -25.8%  221.75 ± 15%  interrupts.CPU37.RES:Rescheduling_interrupts
5375 ± 29%  +34.6%  7232 ± 10%  interrupts.CPU38.NMI:Non-maskable_interrupts
5375 ± 29%  +34.6%  7232 ± 10%  interrupts.CPU38.PMI:Performance_monitoring_interrupts
173.75 ± 24%  +61.2%  280.00 ± 6%  interrupts.CPU38.RES:Rescheduling_interrupts
746.00 ± 4%  +32.5%  988.25 ± 20%  interrupts.CPU4.CAL:Function_call_interrupts
7490 ± 6%  -29.2%  5304 ± 18%  interrupts.CPU51.NMI:Non-maskable_interrupts
7490 ± 6%  -29.2%  5304 ± 18%  interrupts.CPU51.PMI:Performance_monitoring_interrupts
746.50 ± 4%  +63.7%  1222 ± 41%  interrupts.CPU54.CAL:Function_call_interrupts
246.50 ± 18%  -82.7%  42.75 ± 32%  interrupts.CPU54.RES:Rescheduling_interrupts
6665 ± 14%  -48.3%  3444 ± 29%  interrupts.CPU62.NMI:Non-maskable_interrupts
6665 ± 14%  -48.3%  3444 ± 29%  interrupts.CPU62.PMI:Performance_monitoring_interrupts
7141 ± 11%  -39.8%  4298 ± 35%  interrupts.CPU63.NMI:Non-maskable_interrupts
7141 ± 11%  -39.8%  4298 ± 35%  interrupts.CPU63.PMI:Performance_monitoring_interrupts
746.00 ± 2%  +19.3%  889.75 ± 10%  interrupts.CPU68.CAL:Function_call_interrupts
224.25 ± 33%  -64.1%  80.50 ± 49%  interrupts.CPU68.RES:Rescheduling_interrupts
160.00 ± 25%  +67.7%  268.25 ± 7%  interrupts.CPU74.RES:Rescheduling_interrupts
6663 ± 14%  -53.8%  3076 ± 11%  interrupts.CPU77.NMI:Non-maskable_interrupts
6663 ± 14%  -53.8%  3076 ± 11%  interrupts.CPU77.PMI:Performance_monitoring_interrupts
6876 ± 14%  -53.1%  3222 ± 48%  interrupts.CPU80.NMI:Non-maskable_interrupts
6876 ± 14%  -53.1%  3222 ± 48%  interrupts.CPU80.PMI:Performance_monitoring_interrupts
6347 ± 14%  -44.8%  3504 ± 11%  interrupts.CPU83.NMI:Non-maskable_interrupts
6347 ± 14%  -44.8%  3504 ± 11%  interrupts.CPU83.PMI:Performance_monitoring_interrupts
195.50 ± 49%  -59.7%  78.75 ± 67%  interrupts.CPU86.RES:Rescheduling_interrupts
7498 ± 8%  -43.5%  4236 ± 46%  interrupts.CPU90.NMI:Non-maskable_interrupts
7498 ± 8%  -43.5%  4236 ± 46%  interrupts.CPU90.PMI:Performance_monitoring_interrupts
177.75 ± 27%  -64.8%  62.50 ± 48%  interrupts.CPU90.RES:Rescheduling_interrupts
1078 ± 10%  -23.7%  823.00 ± 12%  interrupts.CPU96.CAL:Function_call_interrupts
[ASCII trend plots omitted here: will-it-scale.52.processes (y-axis roughly 2.07e+07 to 2.14e+07), will-it-scale.per_process_ops (roughly 398000 to 412000) and will-it-scale.workload (roughly 2.07e+07 to 2.14e+07); in each plot the bisect-bad samples sit consistently below the bisect-good samples, in line with the -1.2% change above.]

[*] bisect-good sample  [O] bisect-bad sample

***************************************************************************************************
lkp-cfl-e1: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/300s/lkp-cfl-e1/new_dbase/reaim/0xde

commit:
  a51d9615ff ("init: lto: fix PREL32 relocations")
  dc83615370 ("PCI: Fix PREL32 relocations for LTO")

a51d9615ffb5559c dc83615370e7ebcb181a21a8ad1
---------------- ---------------------------
       fail:runs  %reproduction  fail:runs
           |            |            |
1:4  -1%  1:4  perf-profile.children.cycles-pp.error_entry
0:4  -1%  0:4  perf-profile.self.cycles-pp.error_entry
       %stddev      %change        %stddev
           \            |              \
0.51  +1.0%  0.52  reaim.child_systime
243564  -1.3%  240371  reaim.jobs_per_min
15222  -1.3%  15023  reaim.jobs_per_min_child
0.41  +1.4%  0.41  reaim.parent_time
2.80  +9.4%  3.06 ± 3%  reaim.std_dev_percent
0.01  +7.0%  0.01 ± 3%  reaim.std_dev_time
2243615  -1.3%  2215564  proc-vmstat.pgreuse
0.64  +0.1  0.77 ± 7%  mpstat.cpu.all.irq%
0.05 ± 2%  +0.0  0.07 ± 5%  mpstat.cpu.all.soft%
2499 ± 3%  +7.7%  2690 ± 3%  vmstat.system.cs
23157 ± 19%  +39.1%  32211  vmstat.system.in
171129 ± 32%  -42.0%  99233 ± 50%  cpuidle.C10.time
60165674 ± 24%  +520.9%  3.736e+08 ± 56%  cpuidle.C1E.time
427407 ± 20%  +157.8%  1101739 ± 44%  cpuidle.C1E.usage
65948995 ± 60%  +2121.4%  1.465e+09 ± 17%  cpuidle.C3.time
844032 ± 79%  +755.1%  7216904 ± 17%  cpuidle.C3.usage
3.824e+09 ± 4%  -45.1%  2.1e+09 ± 18%  cpuidle.C6.time
1141333 ±141%  -88.1%  135266 ± 4%  cpuidle.POLL.time
98165 ± 3%  -8.8%  89487 ± 3%  sched_debug.cfs_rq:/.load.avg
2.33 ± 37%  +301.8%  9.38 ± 39%  sched_debug.cfs_rq:/.load_avg.min
170.62 ± 70%  +60.0%  273.07 ± 24%  sched_debug.cfs_rq:/.removed.load_avg.max
226.69 ± 10%  +54.1%  349.30 ± 21%  sched_debug.cfs_rq:/.runnable_avg.avg
218.71 ± 11%  +55.3%  339.67 ± 22%  sched_debug.cfs_rq:/.util_avg.avg
34.08 ± 43%  +234.2%  113.92 ± 41%  sched_debug.cfs_rq:/.util_est_enqueued.avg
0.26 ± 43%  +100.8%  0.52 ± 19%  sched_debug.cpu.clock.stddev
18.46 ± 33%  -45.9%  9.98 ± 25%  sched_debug.cpu.nr_uninterruptible.max
-17.42  -51.5%  -8.45  sched_debug.cpu.nr_uninterruptible.min
9.71 ± 40%  -45.9%  5.25 ± 11%  sched_debug.cpu.nr_uninterruptible.stddev
916.75 ±127%  +709.3%  7419 ± 96%  interrupts.135:IR-PCI-MSI.2097156-edge.eth1-TxRx-3
435045 ± 19%  +38.7%  603449  interrupts.CPU0.LOC:Local_timer_interrupts
435308 ± 19%  +38.5%  603037  interrupts.CPU1.LOC:Local_timer_interrupts
434929 ± 19%  +38.8%  603749  interrupts.CPU10.LOC:Local_timer_interrupts
434479 ± 19%  +39.0%  603791  interrupts.CPU11.LOC:Local_timer_interrupts
434621 ± 19%  +39.0%  604229  interrupts.CPU12.LOC:Local_timer_interrupts
434722 ± 19%  +39.1%  604482  interrupts.CPU13.LOC:Local_timer_interrupts
434385 ± 19%  +39.2%  604524  interrupts.CPU14.LOC:Local_timer_interrupts
434173 ± 19%  +39.3%  604697  interrupts.CPU15.LOC:Local_timer_interrupts
434690 ± 19%  +38.9%  603755  interrupts.CPU2.LOC:Local_timer_interrupts
433816 ± 19%  +39.3%  604290  interrupts.CPU3.LOC:Local_timer_interrupts
916.75 ±127%  +709.3%  7419 ± 96%  interrupts.CPU4.135:IR-PCI-MSI.2097156-edge.eth1-TxRx-3
434195 ± 19%  +39.0%  603740  interrupts.CPU4.LOC:Local_timer_interrupts
434195 ± 19%  +39.1%  604095  interrupts.CPU5.LOC:Local_timer_interrupts
433638 ± 19%  +39.1%  603395  interrupts.CPU6.LOC:Local_timer_interrupts
433862 ± 19%  +39.1%  603548  interrupts.CPU7.LOC:Local_timer_interrupts
436082 ± 19%  +38.6%  604258  interrupts.CPU8.LOC:Local_timer_interrupts
435856 ± 19%  +38.5%  603861  interrupts.CPU9.LOC:Local_timer_interrupts
6954003 ± 19%  +39.0%  9662907  interrupts.LOC:Local_timer_interrupts
48914 ± 13%  +50.8%  73740 ± 16%  softirqs.CPU0.RCU
48512 ± 15%  +45.1%  70381 ± 10%  softirqs.CPU1.RCU
48402 ± 14%  +53.5%  74319 ± 18%  softirqs.CPU10.RCU
47968 ± 16%  +54.5%  74124 ± 20%  softirqs.CPU11.RCU
48860 ± 14%  +55.2%  75812 ± 23%  softirqs.CPU12.RCU
48603 ± 14%  +54.9%  75300 ± 23%  softirqs.CPU13.RCU
47769 ± 14%  +55.5%  74284 ± 19%  softirqs.CPU14.RCU
47544 ± 9%  +58.4%  75317 ± 20%  softirqs.CPU15.RCU
50185 ± 16%  +48.8%  74685 ± 17%  softirqs.CPU2.RCU
48269 ± 17%  +58.9%  76692 ± 20%  softirqs.CPU3.RCU
48160 ± 14%  +61.5%  77778 ± 20%  softirqs.CPU4.RCU
48987 ± 14%  +55.2%  76010 ± 19%  softirqs.CPU5.RCU
47547 ± 15%  +55.6%  73969 ± 19%  softirqs.CPU6.RCU
48942 ± 8%  +58.0%  77312 ± 17%  softirqs.CPU7.RCU
47663 ± 15%  +50.8%  71899 ± 17%  softirqs.CPU8.RCU
47067 ± 15%  +44.9%  68194 ± 10%  softirqs.CPU9.RCU
10264 ± 63%  +177.9%  28525 ± 53%  softirqs.NET_RX
773402 ± 14%  +53.8%  1189824 ± 18%  softirqs.RCU
30133 ± 10%  +31.1%  39495 ± 10%  softirqs.TIMER
4744705  +4.9%  4977897  perf-stat.i.cache-misses
2501 ± 3%  +7.7%  2692 ± 3%  perf-stat.i.context-switches
1.187e+10  +1.0%  1.198e+10  perf-stat.i.cpu-cycles
253.78  -2.7%  246.90  perf-stat.i.cpu-migrations
2222  -5.4%  2102  perf-stat.i.cycles-between-cache-misses
0.24 ± 2%  +0.1  0.33 ± 9%  perf-stat.i.dTLB-load-miss-rate%
556067 ± 4%  +45.5%  809110 ± 8%  perf-stat.i.dTLB-load-misses
0.08 ± 2%  +0.1  0.13 ± 11%  perf-stat.i.dTLB-store-miss-rate%
571183  +8.9%  621811 ± 2%  perf-stat.i.dTLB-store-misses
737933 ± 3%  +24.9%  921496  perf-stat.i.iTLB-load-misses
10849 ± 2%  -15.3%  9194  perf-stat.i.instructions-per-iTLB-miss
0.74  +1.0%  0.75  perf-stat.i.metric.GHz
0.39 ± 2%  +4.5%  0.41  perf-stat.i.metric.K/sec
0.69  +0.0  0.72  perf-stat.overall.cache-miss-rate%
1.00  +1.0%  1.01  perf-stat.overall.cpi
2499  -3.8%  2405  perf-stat.overall.cycles-between-cache-misses
0.04 ± 4%  +0.0  0.06 ± 8%  perf-stat.overall.dTLB-load-miss-rate%
0.07  +0.0  0.07 ± 2%  perf-stat.overall.dTLB-store-miss-rate%
65.65  +4.1  69.75  perf-stat.overall.iTLB-load-miss-rate%
16036 ± 3%  -20.0%  12827  perf-stat.overall.instructions-per-iTLB-miss
1.00  -1.0%  0.99  perf-stat.overall.ipc
4729387  +4.9%  4961480  perf-stat.ps.cache-misses
2492 ± 3%  +7.7%  2683 ± 3%  perf-stat.ps.context-switches
1.182e+10  +0.9%  1.193e+10  perf-stat.ps.cpu-cycles
252.92  -2.7%  246.03  perf-stat.ps.cpu-migrations
554253 ± 4%  +45.5%  806429 ± 8%  perf-stat.ps.dTLB-load-misses
569176  +8.9%  619587 ± 2%  perf-stat.ps.dTLB-store-misses
735389 ± 3%  +24.9%  918286  perf-stat.ps.iTLB-load-misses
0.03 ± 10%  +61.9%  0.06 ± 5%  perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.01 ± 17%  +37.5%  0.02 ± 9%  perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
0.04 ± 2%  +46.1%  0.06 ± 10%  perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
0.05 ±131%  -95.0%  0.00 ± 60%  perf-sched.sch_delay.avg.ms.preempt_schedule_common._cond_resched.down_read.iterate_supers.ksys_sync
0.01 ±127%  +4793.3%  0.37 ±159%  perf-sched.sch_delay.avg.ms.preempt_schedule_common._cond_resched.down_read.walk_component.link_path_walk
0.04 ±173%  +1138.3%  0.48 ±116%  perf-sched.sch_delay.avg.ms.preempt_schedule_common._cond_resched.exit_mmap.mmput.do_exit
0.10 ±167%  +297.8%  0.41 ±102%  perf-sched.sch_delay.avg.ms.preempt_schedule_common._cond_resched.kmem_cache_alloc.vm_area_dup.__split_vma
0.04 ± 4%  +34.3%  0.06 ± 6%  perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.00 ± 63%  +18775.0%  0.75 ±109%  perf-sched.sch_delay.avg.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all
0.09  +20.2%  0.11 ± 12%  perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.03  +41.4%  0.05 ± 19%  perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
0.09 ±139%  -95.5%  0.00 ± 85%  perf-sched.sch_delay.max.ms.preempt_schedule_common._cond_resched.down_read.iterate_supers.ksys_sync
0.02 ±154%  +3776.7%  0.71 ±165%  perf-sched.sch_delay.max.ms.preempt_schedule_common._cond_resched.down_read.walk_component.link_path_walk
0.11 ±161%  +1121.1%  1.30 ± 94%  perf-sched.sch_delay.max.ms.preempt_schedule_common._cond_resched.kmem_cache_alloc.vm_area_dup.__split_vma
0.09 ± 5%  +19.8%  0.10 ± 13%  perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.00 ± 63%  +18775.0%  0.75 ±109%  perf-sched.sch_delay.max.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all
126.40 ± 5%  -14.6%  107.96 ± 2%  perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
125.99 ± 5%  -14.6%  107.62 ± 2%  perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
12.97 ± 10%  +24.8%  16.18 ± 6%  perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
122.43 ± 6%  -13.7%  105.63 ± 2%  perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
1598  -11.5%  1414 ± 2%  perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
111.50 ± 8%  +19.3%  133.00 ± 12%  perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
126.37 ± 5%  -14.6%  107.90 ± 2%  perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
125.95 ± 5%  -14.6%  107.56 ± 2%  perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
12.88 ± 10%  +25.0%  16.10 ± 6%  perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.47 ± 9%  +147.3%  1.17 ± 96%  perf-sched.wait_time.avg.ms.preempt_schedule_common._cond_resched.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page
0.24 ± 32%  +5539.0%  13.49 ± 96%  perf-sched.wait_time.avg.ms.preempt_schedule_common._cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
122.39 ± 6%  -13.7%  105.57 ± 2%  perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
1598  -11.5%  1414 ± 2%  perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
2.66 ± 95%  +18768.8%  501.01 ± 99%  perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown]
1.95 ± 47%  +25754.4%  504.35 ± 98%  perf-sched.wait_time.max.ms.preempt_schedule_common._cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.28 ±165%  -97.4%  0.01 ± 5%  perf-sched.wait_time.max.ms.preempt_schedule_common._cond_resched.dput.__fput.task_work_run
1.91 ± 17%  -0.8  1.08 ± 24%  perf-profile.calltrace.cycles-pp.devkmsg_write.cold.new_sync_write.vfs_write.ksys_write.do_syscall_64
1.91 ± 17%  -0.8  1.08 ± 24%  perf-profile.calltrace.cycles-pp.devkmsg_emit.devkmsg_write.cold.new_sync_write.vfs_write.ksys_write
1.91 ± 17%  -0.8  1.08 ± 24%  perf-profile.calltrace.cycles-pp.vprintk_emit.devkmsg_emit.devkmsg_write.cold.new_sync_write.vfs_write
1.69 ± 14%  -0.8  0.89 ± 15%  perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.cold.new_sync_write
2.24 ± 14%  -0.8  1.45 ± 16%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
2.23 ± 14%  -0.8  1.45 ± 16%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
2.18 ± 14%  -0.8  1.40 ± 17%  perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.22 ± 14%  -0.8  1.44 ± 17%  perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
2.25 ± 14%  -0.8  1.48 ± 16%  perf-profile.calltrace.cycles-pp.write
2.20 ± 14%  -0.8  1.43 ± 17%  perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.43 ± 18%  -0.7  0.73 ± 12%  perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.cold
1.33 ± 20%  -0.7  0.67 ± 13%  perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.devkmsg_emit
1.07 ± 21%  -0.6  0.43 ± 57%  perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
1.07 ± 21%  -0.6  0.43 ± 57%  perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
1.01 ± 22%  -0.6  0.42 ± 57%  perf-profile.calltrace.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
1.25 ± 24%  -0.4  0.85 ± 9%  perf-profile.calltrace.cycles-pp.asm_exc_page_fault
1.97 ± 9%  -0.3  1.69 ± 9%  perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.98 ± 9%  -0.3  1.70 ± 9%  perf-profile.calltrace.cycles-pp.ret_from_fork
1.98 ± 9%  -0.3  1.72 ± 2%  perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
1.77 ± 9%  -0.2  1.52 ± 8%  perf-profile.calltrace.cycles-pp.memcpy_toio.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread
1.83 ± 9%  -0.2  1.59 ± 9%  perf-profile.calltrace.cycles-pp.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread.ret_from_fork
1.84 ± 9%  -0.2  1.60 ± 9%  perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
1.84 ± 9%  -0.2  1.60 ± 9%  perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.74 ± 13%  -0.1  0.60 ± 13%  perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt
6.58 ± 12%  +1.8  8.39 ± 4%  perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
9.51 ± 6%  -1.5  7.99 ± 2%  perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
9.31 ± 6%  -1.5  7.82 ± 3%  perf-profile.children.cycles-pp.do_syscall_64
2.28 ± 12%  -0.8  1.44 ± 17%  perf-profile.children.cycles-pp.ksys_write
2.23 ± 12%  -0.8  1.40 ± 17%  perf-profile.children.cycles-pp.new_sync_write
2.27 ± 12%  -0.8  1.44 ± 17%  perf-profile.children.cycles-pp.vfs_write
1.91 ± 17%  -0.8  1.08 ± 24%  perf-profile.children.cycles-pp.devkmsg_write.cold
1.91 ± 17%  -0.8  1.08 ± 24%  perf-profile.children.cycles-pp.devkmsg_emit
2.14 ± 13%  -0.8  1.33 ± 28%  perf-profile.children.cycles-pp.vprintk_emit
2.26 ± 14%  -0.8  1.49 ± 16%  perf-profile.children.cycles-pp.write
1.90 ± 10%  -0.8  1.14 ± 23%  perf-profile.children.cycles-pp.console_unlock
1.64 ± 14%  -0.7  0.97 ± 23%  perf-profile.children.cycles-pp.serial8250_console_write
1.53 ± 15%  -0.6  0.89 ± 24%  perf-profile.children.cycles-pp.uart_console_write
1.33 ± 14%  -0.5  0.80 ± 24%  perf-profile.children.cycles-pp.wait_for_xmitr
1.26 ± 15%  -0.5  0.77 ± 23%  perf-profile.children.cycles-pp.io_serial_in
1.23 ± 16%  -0.5  0.74 ± 26%  perf-profile.children.cycles-pp.serial8250_console_putchar
2.39 ± 9%  -0.3  2.08 ± 2%  perf-profile.children.cycles-pp.__handle_mm_fault
3.69 ± 7%  -0.3  3.40 ± 3%  perf-profile.children.cycles-pp.asm_exc_page_fault
1.99 ± 9%  -0.3  1.71 ± 9%  perf-profile.children.cycles-pp.ret_from_fork
1.97 ± 9%  -0.3  1.69 ± 9%  perf-profile.children.cycles-pp.kthread
1.81 ± 9%  -0.3  1.56 ± 9%  perf-profile.children.cycles-pp.memcpy_toio
1.83 ± 9%  -0.2  1.59 ± 9%  perf-profile.children.cycles-pp.drm_fb_helper_damage_work
1.84 ± 9%  -0.2  1.60 ± 9%  perf-profile.children.cycles-pp.worker_thread
1.84 ± 9%  -0.2  1.60 ± 9%  perf-profile.children.cycles-pp.process_one_work
0.31 ± 20%  -0.2  0.15 ± 21%  perf-profile.children.cycles-pp.io_serial_out
1.17 ± 11%  -0.2  1.02 ± 5%  perf-profile.children.cycles-pp.bprm_execve
0.55 ± 16%  -0.1  0.41 ± 17%  perf-profile.children.cycles-pp.tick_sched_handle
0.53 ± 11%  -0.1  0.43 ± 4%  perf-profile.children.cycles-pp.filemap_map_pages
0.61 ± 11%  -0.1  0.51 ± 4%  perf-profile.children.cycles-pp.do_fault
0.29 ± 14%  -0.1  0.23 ± 12%  perf-profile.children.cycles-pp.scheduler_tick
0.09 ± 14%  -0.1  0.03 ±100%  perf-profile.children.cycles-pp.vma_interval_tree_insert
0.38 ± 6%  -0.1  0.32 ± 3%  perf-profile.children.cycles-pp.ksys_mmap_pgoff
0.11 ± 9%  -0.1  0.06 ± 61%  perf-profile.children.cycles-pp.vmacache_find
0.20 ± 15%  -0.1  0.14 ± 7%  perf-profile.children.cycles-pp.wp_page_copy
0.18 ± 7%  -0.0  0.13 ± 13%  perf-profile.children.cycles-pp.alloc_set_pte
0.15 ± 26%  -0.0  0.11 ± 23%  perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.10 ± 14%  -0.0  0.06 ± 14%  perf-profile.children.cycles-pp.vma_link
0.15 ± 10%  -0.0  0.11 ± 12%  perf-profile.children.cycles-pp.__mmap
0.17 ± 11%  -0.0  0.13 ± 6%  perf-profile.children.cycles-pp.__x64_sys_mprotect
0.17 ± 11%  -0.0  0.13 ± 6%  perf-profile.children.cycles-pp.do_mprotect_pkey
0.16 ± 8%  -0.0  0.12 ± 12%  perf-profile.children.cycles-pp.mprotect_fixup
0.07 ± 10%  -0.0  0.04 ± 57%  perf-profile.children.cycles-pp.arch_scale_freq_tick
0.09 ± 23%  -0.0  0.06 ± 11%  perf-profile.children.cycles-pp.irqtime_account_irq
0.09 ± 9%  -0.0  0.06 ± 13%  perf-profile.children.cycles-pp.__do_sys_wait4
0.12 ± 8%  -0.0  0.10  perf-profile.children.cycles-pp.compar2
0.09 ± 20%  -0.0  0.07 ± 23%  perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.09 ± 10%  -0.0  0.06 ± 13%  perf-profile.children.cycles-pp.kernel_wait4
0.08 ± 8%  -0.0  0.06 ± 11%  perf-profile.children.cycles-pp.do_wait
0.09 ± 9%  -0.0  0.07 ± 12%  perf-profile.children.cycles-pp.__list_del_entry_valid
2.47 ± 6%  +0.2  2.67  perf-profile.children.cycles-pp.page_test
6.90 ± 12%  +1.7  8.59 ± 4%  perf-profile.children.cycles-pp.intel_idle
1.26 ± 15%  -0.5  0.77 ± 23%  perf-profile.self.cycles-pp.io_serial_in
1.81 ± 9%  -0.3  1.55 ± 9%  perf-profile.self.cycles-pp.memcpy_toio
0.31 ± 20%  -0.2  0.15 ± 21%  perf-profile.self.cycles-pp.io_serial_out
0.09 ± 14%  -0.1  0.03 ±100%  perf-profile.self.cycles-pp.vma_interval_tree_insert
0.34 ± 8%  -0.1  0.29 ± 11%  perf-profile.self.cycles-pp.zap_pte_range
0.11 ± 6%  -0.0  0.06 ± 61%  perf-profile.self.cycles-pp.vmacache_find
0.21 ± 18%  -0.0  0.17 ± 17%  perf-profile.self.cycles-pp.___might_sleep
0.07 ± 10%  -0.0  0.04 ± 57%  perf-profile.self.cycles-pp.arch_scale_freq_tick
0.12 ± 6%  -0.0  0.10  perf-profile.self.cycles-pp.compar2
0.08 ± 5%  -0.0  0.07 ± 13%  perf-profile.self.cycles-pp.__list_del_entry_valid
0.15 ± 9%  +0.1  0.21 ± 14%  perf-profile.self.cycles-pp.cfree
6.90 ± 12%  +1.7  8.59 ± 4%  perf-profile.self.cycles-pp.intel_idle

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance.

Thanks,
Oliver Sang

View attachment "config-5.11.0-rc2-00009-gdc83615370e7" of type "text/plain" (172460 bytes)
View attachment "job-script" of type "text/plain" (7540 bytes)
View attachment "job.yaml" of type "text/plain" (5107 bytes)
View attachment "reproduce" of type "text/plain" (337 bytes)