Message-ID: <20211012085932.GD25752@xsang-OptiPlex-9020>
Date: Tue, 12 Oct 2021 16:59:32 +0800
From: kernel test robot <oliver.sang@...el.com>
To: Huang Ying <ying.huang@...el.com>
Cc: 0day robot <lkp@...el.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Rik van Riel <riel@...riel.com>,
Mel Gorman <mgorman@...e.de>,
Peter Zijlstra <peterz@...radead.org>,
Yang Shi <shy828301@...il.com>, Zi Yan <ziy@...dia.com>,
Wei Xu <weixugc@...gle.com>, osalvador <osalvador@...e.de>,
Shakeel Butt <shakeelb@...gle.com>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
ying.huang@...el.com, feng.tang@...el.com,
zhengjun.xing@...ux.intel.com, linux-mm@...ck.org
Subject: [memory tiering] 76ff9ff49a: vm-scalability.median 5.0% improvement
Greetings,
FYI, we noticed a 5.0% improvement of vm-scalability.median due to commit:
commit: 76ff9ff49a478cf8936f020e7cbd052babc86245 ("[PATCH -V9 3/6] memory tiering: skip to scan fast memory")
url: https://github.com/0day-ci/linux/commits/Huang-Ying/NUMA-balancing-optimize-memory-placement-for-memory-tiering-system/20211008-164204
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 769fdf83df57b373660343ef4270b3ada91ef434
in testcase: vm-scalability
on test machine: 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
with following parameters:
runtime: 300s
size: 512G
test: anon-w-rand
cpufreq_governor: performance
ucode: 0x5003006
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
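The comparison tables below pair the parent commit (left column) with the patched commit (right column); the %change figure is relative to the parent, and %stddev gives the run-to-run spread. As a quick sanity check (a minimal sketch, not part of the LKP tooling), the headline numbers can be recomputed from the raw before/after values:

```python
# Minimal sketch (not part of the LKP tooling): recompute the %change
# column from the raw before/after values reported in the tables below.
def pct_change(before, after):
    """Relative change in percent, as shown in the %change column."""
    return (after - before) / before * 100.0

# vm-scalability.median: 43870 -> 46077, reported as +5.0%
print(f"{pct_change(43870, 46077):+.1f}%")       # +5.0%

# vm-scalability.time.minor_page_faults: 8965353 -> 2846758, reported as -68.2%
print(f"{pct_change(8965353, 2846758):+.1f}%")   # -68.2%
```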
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/512G/lkp-csl-2ap4/anon-w-rand/vm-scalability/0x5003006
commit:
9fbea5e92b ("NUMA balancing: optimize page placement for memory tiering system")
76ff9ff49a ("memory tiering: skip to scan fast memory")
9fbea5e92b8daae4 76ff9ff49a478cf8936f020e7cb
---------------- ---------------------------
%stddev %change %stddev
\ | \
0.00 ± 5% -10.1% 0.00 ± 3% vm-scalability.free_time
43870 +5.0% 46077 vm-scalability.median
0.53 ± 24% +1.9 2.40 ± 20% vm-scalability.median_stddev%
296.42 +3.3% 306.08 vm-scalability.time.elapsed_time
296.42 +3.3% 306.08 vm-scalability.time.elapsed_time.max
296908 ± 2% -20.0% 237565 ± 5% vm-scalability.time.involuntary_context_switches
8965353 -68.2% 2846758 ± 6% vm-scalability.time.minor_page_faults
17482 -14.3% 14989 vm-scalability.time.percent_of_cpu_this_job_got
4484 -19.8% 3594 ± 2% vm-scalability.time.system_time
47330 -10.7% 42287 ± 2% vm-scalability.time.user_time
12597 -12.5% 11028 ± 3% vm-scalability.time.voluntary_context_switches
2.336e+09 -11.9% 2.058e+09 ± 2% vm-scalability.workload
11927 ± 6% +69.4% 20201 ± 4% uptime.idle
45.31 ± 2% +4.0% 47.13 boot-time.boot
7668 ± 2% +4.5% 8012 boot-time.idle
3.992e+09 ± 17% +197.6% 1.188e+10 ± 8% cpuidle..time
8277676 ± 17% +165.0% 21937927 ± 13% cpuidle..usage
1437923 ± 12% -26.0% 1063848 ± 4% numa-numastat.node1.local_node
1484898 ± 10% -22.5% 1150364 ± 4% numa-numastat.node1.numa_hit
47588 ± 61% +82.4% 86778 numa-numastat.node1.other_node
6.97 ± 16% +13.2 20.18 ± 8% mpstat.cpu.all.idle%
1.90 -0.2 1.72 mpstat.cpu.all.irq%
7.97 -1.8 6.19 mpstat.cpu.all.sys%
83.08 -11.3 71.83 ± 2% mpstat.cpu.all.usr%
1444 -14.4% 1236 ± 4% turbostat.Avg_MHz
93.50 -13.4 80.13 ± 4% turbostat.Busy%
783206 ±129% +413.7% 4023234 ± 60% turbostat.C6
6.04 ± 21% +209.3% 18.67 ± 21% turbostat.CPU%c1
239.62 -6.3% 224.43 ± 2% turbostat.PkgWatt
7.00 ± 18% +190.5% 20.33 ± 7% vmstat.cpu.id
82.33 -14.0% 70.83 vmstat.cpu.us
1.048e+08 +12.1% 1.175e+08 vmstat.memory.free
179.17 -14.4% 153.33 ± 2% vmstat.procs.r
4031 -8.1% 3704 ± 3% vmstat.system.cs
22475618 ± 3% -22.1% 17517115 ± 12% numa-meminfo.node1.AnonHugePages
22587494 ± 3% -22.1% 17584894 ± 12% numa-meminfo.node1.AnonPages
22646122 ± 4% -22.3% 17589979 ± 12% numa-meminfo.node1.Inactive
22646122 ± 4% -22.3% 17589970 ± 12% numa-meminfo.node1.Inactive(anon)
25905734 ± 4% +18.7% 30756229 ± 6% numa-meminfo.node1.MemFree
23629681 ± 4% -20.5% 18779185 ± 10% numa-meminfo.node1.MemUsed
49340 ± 17% -29.9% 34586 ± 15% numa-meminfo.node1.PageTables
36467 ±220% +2391.0% 908402 ±112% numa-meminfo.node3.Unevictable
87524616 -14.7% 74626124 meminfo.AnonHugePages
87952806 -14.7% 74999102 meminfo.AnonPages
92866025 -14.8% 79091942 ± 2% meminfo.Committed_AS
88140856 -14.7% 75179098 meminfo.Inactive
88140803 -14.7% 75179046 meminfo.Inactive(anon)
1.04e+08 +12.5% 1.171e+08 meminfo.MemAvailable
1.047e+08 +12.5% 1.177e+08 meminfo.MemFree
93039748 -14.0% 79986317 meminfo.Memused
179104 -14.5% 153155 meminfo.PageTables
5629708 ± 3% -22.0% 4392349 ± 12% numa-vmstat.node1.nr_anon_pages
10940 ± 3% -21.9% 8545 ± 12% numa-vmstat.node1.nr_anon_transparent_hugepages
6493715 ± 3% +18.5% 7693620 ± 6% numa-vmstat.node1.nr_free_pages
5644782 ± 3% -22.2% 4393439 ± 12% numa-vmstat.node1.nr_inactive_anon
201.00 ± 93% -100.0% 0.00 numa-vmstat.node1.nr_isolated_anon
12322 ± 17% -29.8% 8645 ± 15% numa-vmstat.node1.nr_page_table_pages
5644427 ± 3% -22.2% 4393052 ± 12% numa-vmstat.node1.nr_zone_inactive_anon
9116 ±220% +2391.1% 227100 ±112% numa-vmstat.node3.nr_unevictable
9116 ±220% +2391.1% 227100 ±112% numa-vmstat.node3.nr_zone_unevictable
21996061 -14.8% 18741659 ± 2% proc-vmstat.nr_anon_pages
42751 -14.8% 36422 ± 2% proc-vmstat.nr_anon_transparent_hugepages
2594160 +12.6% 2921578 proc-vmstat.nr_dirty_background_threshold
5194664 +12.6% 5850300 proc-vmstat.nr_dirty_threshold
26161778 +12.5% 29440763 proc-vmstat.nr_free_pages
22043312 -14.8% 18787084 ± 2% proc-vmstat.nr_inactive_anon
359.83 ± 17% -100.0% 0.17 ±223% proc-vmstat.nr_isolated_anon
34447 -2.6% 33554 proc-vmstat.nr_kernel_stack
44738 -14.5% 38259 ± 2% proc-vmstat.nr_page_table_pages
22043308 -14.8% 18787082 ± 2% proc-vmstat.nr_zone_inactive_anon
5746797 -100.0% 0.00 proc-vmstat.numa_hint_faults
5649618 -100.0% 0.00 proc-vmstat.numa_hint_faults_local
5698412 ± 2% -10.4% 5106741 ± 4% proc-vmstat.numa_hit
5622112 -100.0% 0.00 proc-vmstat.numa_huge_pte_updates
5439419 ± 2% -10.9% 4847613 ± 4% proc-vmstat.numa_local
23521423 ± 6% -100.0% 0.00 proc-vmstat.numa_pages_migrated
2.879e+09 -100.0% 0.00 proc-vmstat.numa_pte_updates
5705992 ± 2% -10.3% 5116075 ± 4% proc-vmstat.pgalloc_normal
10333677 -59.3% 4210415 ± 4% proc-vmstat.pgfault
5569170 ± 3% -10.1% 5005163 ± 4% proc-vmstat.pgfree
23521423 ± 6% -100.0% 0.00 proc-vmstat.pgmigrate_success
1009758 -11.9% 889598 ± 2% proc-vmstat.thp_fault_alloc
45861 ± 6% -100.0% 0.00 proc-vmstat.thp_migration_success
0.27 ± 81% -88.7% 0.03 ± 65% perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
766.06 ± 85% -99.4% 4.40 ± 98% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.06 ± 15% +61.1% 0.09 ± 27% perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork
496.79 ± 86% -94.2% 29.00 ±214% perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
37.07 ±174% -91.7% 3.08 ± 83% perf-sched.sch_delay.max.ms.syslog_print.do_syslog.part.0.kmsg_read
96.02 ± 24% +56.7% 150.46 ± 10% perf-sched.total_wait_and_delay.average.ms
39825 ± 20% -34.8% 25952 ± 10% perf-sched.total_wait_and_delay.count.ms
95.79 ± 24% +56.0% 149.39 ± 10% perf-sched.total_wait_time.average.ms
1.40 ± 42% -58.2% 0.59 ± 31% perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
0.38 ± 33% -47.3% 0.20 ± 21% perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
17.03 ± 56% +92.0% 32.70 ± 39% perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
586.50 ± 30% -62.1% 222.50 ±118% perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64
72.83 ±141% +233.0% 242.50 ± 19% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
95.85 ±170% -90.3% 9.34 ± 48% perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
1710 ± 66% -96.2% 64.24 ±116% perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.40 ± 42% -58.2% 0.58 ± 31% perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
0.38 ± 33% -47.1% 0.20 ± 21% perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
9.36 ±203% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.copy_huge_page.migrate_page_copy.migrate_page
1.17 ±183% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.rmap_walk_anon.remove_migration_ptes.migrate_pages
2.18 ± 93% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate
1.67 ± 86% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.wait_for_completion.stop_two_cpus.migrate_swap
92.94 ±166% -88.8% 10.44 ± 85% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown]
95.85 ±170% -90.3% 9.34 ± 48% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
341.43 ±215% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.copy_huge_page.migrate_page_copy.migrate_page
1.32 ±163% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.rmap_walk_anon.remove_migration_ptes.migrate_pages
27.99 ±153% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate
6.19 ± 89% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.wait_for_completion.stop_two_cpus.migrate_swap
0.22 ±142% +2.8 2.99 ± 42% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.24 ±141% +3.2 3.41 ± 42% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
0.24 ±141% +3.2 3.44 ± 42% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.25 ±141% +3.4 3.60 ± 42% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.25 ±141% +3.4 3.61 ± 42% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.25 ±141% +3.4 3.61 ± 42% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
0.26 ±141% +3.4 3.67 ± 42% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.load_balance
0.00 +0.1 0.06 ± 19% perf-profile.children.cycles-pp.irqentry_exit_to_user_mode
0.00 +0.1 0.06 ± 17% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.00 +0.1 0.09 ± 21% perf-profile.children.cycles-pp.rebalance_domains
0.00 +0.1 0.10 ± 61% perf-profile.children.cycles-pp.tick_nohz_next_event
0.08 ± 50% +0.1 0.18 ± 17% perf-profile.children.cycles-pp.devkmsg_write.cold
0.08 ± 50% +0.1 0.18 ± 17% perf-profile.children.cycles-pp.devkmsg_emit
0.09 ± 52% +0.1 0.18 ± 15% perf-profile.children.cycles-pp.serial8250_console_putchar
0.09 ± 52% +0.1 0.19 ± 15% perf-profile.children.cycles-pp.uart_console_write
0.09 ± 52% +0.1 0.19 ± 15% perf-profile.children.cycles-pp.write
0.09 ± 51% +0.1 0.20 ± 15% perf-profile.children.cycles-pp.wait_for_xmitr
0.00 +0.1 0.11 ± 58% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.10 ± 50% +0.1 0.20 ± 16% perf-profile.children.cycles-pp.vprintk_emit
0.10 ± 49% +0.1 0.21 ± 21% perf-profile.children.cycles-pp.ksys_write
0.10 ± 49% +0.1 0.21 ± 21% perf-profile.children.cycles-pp.vfs_write
0.10 ± 49% +0.1 0.21 ± 21% perf-profile.children.cycles-pp.new_sync_write
0.09 ± 52% +0.1 0.20 ± 15% perf-profile.children.cycles-pp.console_unlock
0.09 ± 52% +0.1 0.20 ± 15% perf-profile.children.cycles-pp.serial8250_console_write
0.14 ± 16% +0.1 0.27 ± 23% perf-profile.children.cycles-pp.__softirqentry_text_start
0.18 ± 13% +0.1 0.32 ± 19% perf-profile.children.cycles-pp.irq_exit_rcu
0.00 +0.1 0.14 ± 55% perf-profile.children.cycles-pp.menu_select
0.69 ± 14% +0.2 0.88 ± 10% perf-profile.children.cycles-pp.__hrtimer_run_queues
1.29 ± 10% +0.4 1.71 ± 11% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.38 ± 63% +2.7 3.05 ± 41% perf-profile.children.cycles-pp.intel_idle
0.43 ± 62% +3.1 3.49 ± 41% perf-profile.children.cycles-pp.cpuidle_enter
0.43 ± 62% +3.1 3.49 ± 41% perf-profile.children.cycles-pp.cpuidle_enter_state
0.44 ± 61% +3.2 3.61 ± 42% perf-profile.children.cycles-pp.start_secondary
0.44 ± 61% +3.2 3.67 ± 42% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
0.44 ± 61% +3.2 3.67 ± 42% perf-profile.children.cycles-pp.cpu_startup_entry
0.44 ± 61% +3.2 3.67 ± 42% perf-profile.children.cycles-pp.do_idle
0.38 ± 63% +2.7 3.05 ± 41% perf-profile.self.cycles-pp.intel_idle
1.935e+10 -14.9% 1.646e+10 ± 2% perf-stat.i.branch-instructions
9.144e+08 -15.8% 7.7e+08 ± 2% perf-stat.i.cache-misses
9.366e+08 -15.3% 7.938e+08 ± 2% perf-stat.i.cache-references
3904 -7.4% 3615 ± 3% perf-stat.i.context-switches
8.60 -5.0% 8.18 ± 2% perf-stat.i.cpi
5.469e+11 -14.6% 4.67e+11 ± 2% perf-stat.i.cpu-cycles
729.58 -2.9% 708.47 perf-stat.i.cycles-between-cache-misses
2.308e+10 -14.9% 1.964e+10 ± 2% perf-stat.i.dTLB-loads
2046156 ± 9% -23.2% 1571948 ± 13% perf-stat.i.dTLB-store-misses
8.466e+09 -14.7% 7.22e+09 ± 2% perf-stat.i.dTLB-stores
89.84 -9.0 80.88 perf-stat.i.iTLB-load-miss-rate%
263296 ± 9% +123.0% 587185 ± 19% perf-stat.i.iTLB-loads
8.137e+10 -14.9% 6.925e+10 ± 2% perf-stat.i.instructions
0.15 +2.6% 0.15 perf-stat.i.ipc
2.85 -14.6% 2.44 ± 2% perf-stat.i.metric.GHz
126.49 ± 6% +190.4% 367.36 ± 6% perf-stat.i.metric.K/sec
275.22 -15.0% 233.82 ± 2% perf-stat.i.metric.M/sec
33877 ± 2% -61.8% 12941 ± 4% perf-stat.i.minor-faults
59.35 ± 3% -15.6 43.76 ± 3% perf-stat.i.node-load-miss-rate%
2576301 ± 6% -74.8% 647958 ± 6% perf-stat.i.node-load-misses
0.83 ± 13% +6.9 7.75 ± 13% perf-stat.i.node-store-miss-rate%
3938739 ± 6% +978.5% 42480928 ± 8% perf-stat.i.node-store-misses
8.997e+08 -19.7% 7.224e+08 ± 2% perf-stat.i.node-stores
33880 ± 2% -61.8% 12944 ± 4% perf-stat.i.page-faults
0.03 ± 6% +0.0 0.04 ± 7% perf-stat.overall.branch-miss-rate%
83.20 -13.9 69.27 ± 4% perf-stat.overall.iTLB-load-miss-rate%
65.41 ± 3% -33.2 32.16 ± 4% perf-stat.overall.node-load-miss-rate%
0.44 ± 6% +5.1 5.54 ± 8% perf-stat.overall.node-store-miss-rate%
1.926e+10 -14.4% 1.649e+10 ± 2% perf-stat.ps.branch-instructions
9.107e+08 -15.2% 7.718e+08 ± 2% perf-stat.ps.cache-misses
9.33e+08 -14.7% 7.954e+08 ± 2% perf-stat.ps.cache-references
3894 -7.7% 3593 ± 3% perf-stat.ps.context-switches
5.458e+11 -14.1% 4.689e+11 perf-stat.ps.cpu-cycles
2.298e+10 -14.4% 1.968e+10 ± 2% perf-stat.ps.dTLB-loads
2040735 ± 8% -22.8% 1576006 ± 12% perf-stat.ps.dTLB-store-misses
8.43e+09 -14.2% 7.234e+09 ± 2% perf-stat.ps.dTLB-stores
256776 ± 8% +121.6% 568913 ± 19% perf-stat.ps.iTLB-loads
8.101e+10 -14.3% 6.939e+10 ± 2% perf-stat.ps.instructions
34379 ± 2% -61.3% 13320 ± 4% perf-stat.ps.minor-faults
2701913 ± 6% -76.1% 645841 ± 6% perf-stat.ps.node-load-misses
3988458 ± 5% +965.6% 42500100 ± 8% perf-stat.ps.node-store-misses
8.956e+08 -19.1% 7.243e+08 ± 2% perf-stat.ps.node-stores
34381 ± 2% -61.3% 13321 ± 4% perf-stat.ps.page-faults
2.406e+13 -11.5% 2.131e+13 ± 2% perf-stat.total.instructions
13281 ± 10% +28.6% 17086 ± 5% softirqs.CPU0.SCHED
8734 ± 14% +45.5% 12709 ± 11% softirqs.CPU1.SCHED
6931 ± 15% +90.2% 13181 ± 7% softirqs.CPU120.SCHED
7210 ± 13% +84.1% 13274 ± 8% softirqs.CPU121.SCHED
6879 ± 16% +89.8% 13054 ± 10% softirqs.CPU122.SCHED
6827 ± 14% +97.0% 13451 ± 13% softirqs.CPU123.SCHED
6820 ± 14% +96.1% 13371 ± 10% softirqs.CPU124.SCHED
6866 ± 15% +94.3% 13340 ± 10% softirqs.CPU125.SCHED
6790 ± 14% +94.6% 13216 ± 12% softirqs.CPU126.SCHED
6882 ± 13% +94.5% 13384 ± 11% softirqs.CPU127.SCHED
7041 ± 17% +85.8% 13081 ± 10% softirqs.CPU128.SCHED
7089 ± 16% +86.3% 13205 ± 10% softirqs.CPU129.SCHED
6819 ± 14% +93.9% 13222 ± 11% softirqs.CPU130.SCHED
6845 ± 14% +97.4% 13512 ± 11% softirqs.CPU131.SCHED
6799 ± 15% +94.6% 13229 ± 14% softirqs.CPU132.SCHED
6814 ± 14% +95.1% 13295 ± 11% softirqs.CPU133.SCHED
6902 ± 13% +90.2% 13130 ± 14% softirqs.CPU134.SCHED
6811 ± 14% +92.1% 13083 ± 13% softirqs.CPU135.SCHED
6817 ± 15% +98.4% 13525 ± 12% softirqs.CPU136.SCHED
6752 ± 15% +95.9% 13229 ± 14% softirqs.CPU137.SCHED
6782 ± 14% +98.2% 13442 ± 14% softirqs.CPU138.SCHED
6787 ± 14% +93.4% 13130 ± 14% softirqs.CPU139.SCHED
6784 ± 14% +96.8% 13349 ± 12% softirqs.CPU140.SCHED
6847 ± 13% +90.2% 13025 ± 14% softirqs.CPU141.SCHED
6819 ± 15% +91.1% 13028 ± 14% softirqs.CPU142.SCHED
6966 ± 14% +88.3% 13118 ± 14% softirqs.CPU143.SCHED
7573 ± 7% +73.9% 13172 ± 12% softirqs.CPU144.SCHED
7691 ± 8% +71.7% 13210 ± 16% softirqs.CPU146.SCHED
7671 ± 8% +71.6% 13165 ± 17% softirqs.CPU148.SCHED
7773 ± 7% +76.2% 13696 ± 11% softirqs.CPU159.SCHED
8036 ± 7% +49.3% 11995 ± 14% softirqs.CPU168.SCHED
7981 ± 7% +55.5% 12409 ± 19% softirqs.CPU169.SCHED
8003 ± 7% +55.6% 12451 ± 16% softirqs.CPU170.SCHED
7581 ± 6% +65.5% 12543 ± 16% softirqs.CPU191.SCHED
8101 ± 17% +57.4% 12752 ± 16% softirqs.CPU2.SCHED
7027 ± 9% +50.3% 10561 ± 6% softirqs.CPU24.SCHED
6931 ± 13% +75.2% 12145 ± 3% softirqs.CPU25.SCHED
7329 ± 23% +81.4% 13295 ± 6% softirqs.CPU26.SCHED
6843 ± 14% +95.0% 13340 ± 4% softirqs.CPU27.SCHED
6795 ± 12% +100.9% 13652 ± 7% softirqs.CPU28.SCHED
6796 ± 14% +97.4% 13418 ± 8% softirqs.CPU29.SCHED
7139 ± 11% +77.5% 12670 ± 24% softirqs.CPU3.SCHED
6830 ± 14% +98.1% 13531 ± 8% softirqs.CPU30.SCHED
6758 ± 13% +102.3% 13669 ± 6% softirqs.CPU31.SCHED
6456 ± 15% +104.2% 13182 ± 9% softirqs.CPU32.SCHED
6879 ± 14% +94.3% 13364 ± 12% softirqs.CPU33.SCHED
6828 ± 13% +94.5% 13277 ± 11% softirqs.CPU34.SCHED
6782 ± 13% +94.8% 13211 ± 13% softirqs.CPU35.SCHED
6780 ± 14% +96.3% 13309 ± 11% softirqs.CPU36.SCHED
6779 ± 13% +87.7% 12721 ± 12% softirqs.CPU37.SCHED
6734 ± 14% +96.5% 13234 ± 10% softirqs.CPU38.SCHED
6841 ± 13% +95.4% 13368 ± 13% softirqs.CPU39.SCHED
6839 ± 13% +88.3% 12875 ± 10% softirqs.CPU40.SCHED
6714 ± 14% +94.6% 13062 ± 11% softirqs.CPU41.SCHED
7162 ± 10% +82.8% 13095 ± 13% softirqs.CPU42.SCHED
6782 ± 14% +93.5% 13126 ± 13% softirqs.CPU43.SCHED
6750 ± 13% +94.2% 13109 ± 14% softirqs.CPU44.SCHED
6769 ± 13% +97.9% 13397 ± 13% softirqs.CPU45.SCHED
6800 ± 15% +91.3% 13009 ± 17% softirqs.CPU46.SCHED
6755 ± 14% +95.5% 13209 ± 13% softirqs.CPU47.SCHED
7404 ± 5% +43.2% 10604 ± 3% softirqs.CPU48.SCHED
7766 ± 7% +55.9% 12110 ± 4% softirqs.CPU49.SCHED
7568 ± 8% +71.6% 12988 ± 9% softirqs.CPU50.SCHED
7348 ± 8% +85.3% 13613 ± 17% softirqs.CPU51.SCHED
7612 ± 8% +74.2% 13261 ± 17% softirqs.CPU52.SCHED
7671 ± 8% +73.4% 13300 ± 12% softirqs.CPU53.SCHED
7653 ± 8% +73.9% 13306 ± 14% softirqs.CPU54.SCHED
7624 ± 8% +75.2% 13356 ± 16% softirqs.CPU55.SCHED
7545 ± 9% +73.6% 13100 ± 14% softirqs.CPU58.SCHED
7666 ± 7% +70.2% 13049 ± 18% softirqs.CPU65.SCHED
7669 ± 8% +47.1% 11285 ± 5% softirqs.CPU72.SCHED
7753 ± 7% +52.2% 11799 ± 5% softirqs.CPU73.SCHED
7922 ± 7% +56.2% 12372 ± 15% softirqs.CPU74.SCHED
7953 ± 6% +56.7% 12463 ± 16% softirqs.CPU75.SCHED
7964 ± 7% +53.0% 12187 ± 15% softirqs.CPU76.SCHED
7911 ± 6% +54.0% 12182 ± 18% softirqs.CPU77.SCHED
1464438 ± 6% +64.0% 2401987 ± 4% softirqs.SCHED
7857 ± 6% -27.8% 5673 ± 24% interrupts.CPU0.NMI:Non-maskable_interrupts
7857 ± 6% -27.8% 5673 ± 24% interrupts.CPU0.PMI:Performance_monitoring_interrupts
970.17 ± 35% -37.8% 603.67 ± 7% interrupts.CPU0.RES:Rescheduling_interrupts
7628 ± 8% -27.0% 5572 ± 27% interrupts.CPU1.NMI:Non-maskable_interrupts
7628 ± 8% -27.0% 5572 ± 27% interrupts.CPU1.PMI:Performance_monitoring_interrupts
367.50 ± 35% -38.7% 225.17 ± 16% interrupts.CPU100.RES:Rescheduling_interrupts
534.17 ± 43% -57.4% 227.50 ± 11% interrupts.CPU101.RES:Rescheduling_interrupts
1099 ±135% -80.7% 212.50 ± 11% interrupts.CPU102.RES:Rescheduling_interrupts
7737 ± 6% -48.2% 4004 ± 36% interrupts.CPU107.NMI:Non-maskable_interrupts
7737 ± 6% -48.2% 4004 ± 36% interrupts.CPU107.PMI:Performance_monitoring_interrupts
7692 ± 6% -52.5% 3657 ± 22% interrupts.CPU112.NMI:Non-maskable_interrupts
7692 ± 6% -52.5% 3657 ± 22% interrupts.CPU112.PMI:Performance_monitoring_interrupts
392.33 ± 51% -46.2% 211.17 ± 10% interrupts.CPU113.RES:Rescheduling_interrupts
350.17 ± 36% -38.8% 214.17 ± 14% interrupts.CPU114.RES:Rescheduling_interrupts
434.17 ± 46% -42.9% 247.83 ± 20% interrupts.CPU119.RES:Rescheduling_interrupts
8256 ± 3% -51.0% 4048 ± 50% interrupts.CPU120.NMI:Non-maskable_interrupts
8256 ± 3% -51.0% 4048 ± 50% interrupts.CPU120.PMI:Performance_monitoring_interrupts
8184 ± 4% -47.5% 4297 ± 41% interrupts.CPU121.NMI:Non-maskable_interrupts
8184 ± 4% -47.5% 4297 ± 41% interrupts.CPU121.PMI:Performance_monitoring_interrupts
8189 ± 4% -46.9% 4347 ± 40% interrupts.CPU122.NMI:Non-maskable_interrupts
8189 ± 4% -46.9% 4347 ± 40% interrupts.CPU122.PMI:Performance_monitoring_interrupts
560.00 ± 83% -66.3% 188.67 ± 15% interrupts.CPU122.RES:Rescheduling_interrupts
8178 ± 3% -48.9% 4177 ± 43% interrupts.CPU123.NMI:Non-maskable_interrupts
8178 ± 3% -48.9% 4177 ± 43% interrupts.CPU123.PMI:Performance_monitoring_interrupts
8152 ± 3% -43.5% 4607 ± 36% interrupts.CPU124.NMI:Non-maskable_interrupts
8152 ± 3% -43.5% 4607 ± 36% interrupts.CPU124.PMI:Performance_monitoring_interrupts
8186 ± 3% -48.6% 4207 ± 45% interrupts.CPU126.NMI:Non-maskable_interrupts
8186 ± 3% -48.6% 4207 ± 45% interrupts.CPU126.PMI:Performance_monitoring_interrupts
8167 ± 4% -49.6% 4120 ± 41% interrupts.CPU127.NMI:Non-maskable_interrupts
8167 ± 4% -49.6% 4120 ± 41% interrupts.CPU127.PMI:Performance_monitoring_interrupts
318.17 ± 42% -40.1% 190.50 ± 13% interrupts.CPU128.RES:Rescheduling_interrupts
8156 ± 4% -43.7% 4593 ± 37% interrupts.CPU129.NMI:Non-maskable_interrupts
8156 ± 4% -43.7% 4593 ± 37% interrupts.CPU129.PMI:Performance_monitoring_interrupts
8159 ± 4% -44.2% 4552 ± 46% interrupts.CPU130.NMI:Non-maskable_interrupts
8159 ± 4% -44.2% 4552 ± 46% interrupts.CPU130.PMI:Performance_monitoring_interrupts
8162 ± 4% -43.2% 4638 ± 37% interrupts.CPU131.NMI:Non-maskable_interrupts
8162 ± 4% -43.2% 4638 ± 37% interrupts.CPU131.PMI:Performance_monitoring_interrupts
8130 ± 5% -48.0% 4230 ± 45% interrupts.CPU132.NMI:Non-maskable_interrupts
8130 ± 5% -48.0% 4230 ± 45% interrupts.CPU132.PMI:Performance_monitoring_interrupts
1184 ± 4% +21.8% 1442 ± 27% interrupts.CPU133.CAL:Function_call_interrupts
7400 ± 21% -54.4% 3375 ± 20% interrupts.CPU133.NMI:Non-maskable_interrupts
7400 ± 21% -54.4% 3375 ± 20% interrupts.CPU133.PMI:Performance_monitoring_interrupts
7408 ± 20% -53.7% 3432 ± 23% interrupts.CPU134.NMI:Non-maskable_interrupts
7408 ± 20% -53.7% 3432 ± 23% interrupts.CPU134.PMI:Performance_monitoring_interrupts
1509 ± 17% -20.5% 1200 ± 4% interrupts.CPU135.CAL:Function_call_interrupts
7461 ± 21% -52.9% 3510 ± 24% interrupts.CPU138.NMI:Non-maskable_interrupts
7461 ± 21% -52.9% 3510 ± 24% interrupts.CPU138.PMI:Performance_monitoring_interrupts
370.33 ± 66% -40.0% 222.33 ± 13% interrupts.CPU139.RES:Rescheduling_interrupts
8131 ± 4% -51.4% 3950 ± 17% interrupts.CPU142.NMI:Non-maskable_interrupts
8131 ± 4% -51.4% 3950 ± 17% interrupts.CPU142.PMI:Performance_monitoring_interrupts
1577 ± 18% -24.3% 1194 ± 4% interrupts.CPU147.CAL:Function_call_interrupts
7255 ± 20% -45.3% 3971 ± 49% interrupts.CPU147.NMI:Non-maskable_interrupts
7255 ± 20% -45.3% 3971 ± 49% interrupts.CPU147.PMI:Performance_monitoring_interrupts
7943 ± 5% -44.4% 4418 ± 37% interrupts.CPU148.NMI:Non-maskable_interrupts
7943 ± 5% -44.4% 4418 ± 37% interrupts.CPU148.PMI:Performance_monitoring_interrupts
7231 ± 20% -48.8% 3704 ± 52% interrupts.CPU150.NMI:Non-maskable_interrupts
7231 ± 20% -48.8% 3704 ± 52% interrupts.CPU150.PMI:Performance_monitoring_interrupts
270.83 ± 28% -30.4% 188.50 ± 17% interrupts.CPU152.RES:Rescheduling_interrupts
7228 ± 21% -56.3% 3156 ± 33% interrupts.CPU160.NMI:Non-maskable_interrupts
7228 ± 21% -56.3% 3156 ± 33% interrupts.CPU160.PMI:Performance_monitoring_interrupts
2620 ± 38% -46.4% 1404 ± 22% interrupts.CPU169.CAL:Function_call_interrupts
1668 ± 49% -29.1% 1183 ± 3% interrupts.CPU176.CAL:Function_call_interrupts
237.00 ± 15% -26.4% 174.50 ± 14% interrupts.CPU178.RES:Rescheduling_interrupts
477.17 ± 46% -40.5% 283.83 ± 6% interrupts.CPU18.RES:Rescheduling_interrupts
378.17 ± 26% -23.3% 290.00 ± 8% interrupts.CPU19.RES:Rescheduling_interrupts
1409 ± 16% -21.5% 1106 ± 6% interrupts.CPU191.CAL:Function_call_interrupts
519.50 ± 31% -34.0% 343.00 ± 19% interrupts.CPU2.RES:Rescheduling_interrupts
477.00 ± 26% -42.8% 273.00 ± 6% interrupts.CPU22.RES:Rescheduling_interrupts
581.33 ± 44% -48.7% 298.33 ± 15% interrupts.CPU23.RES:Rescheduling_interrupts
1718 ± 18% -15.6% 1450 ± 3% interrupts.CPU24.CAL:Function_call_interrupts
8257 ± 3% -21.1% 6511 ± 6% interrupts.CPU24.NMI:Non-maskable_interrupts
8257 ± 3% -21.1% 6511 ± 6% interrupts.CPU24.PMI:Performance_monitoring_interrupts
741.17 ± 86% -66.6% 247.50 ± 13% interrupts.CPU25.RES:Rescheduling_interrupts
8225 ± 3% -45.6% 4472 ± 30% interrupts.CPU26.NMI:Non-maskable_interrupts
8225 ± 3% -45.6% 4472 ± 30% interrupts.CPU26.PMI:Performance_monitoring_interrupts
1430 ±149% -83.3% 238.67 ± 11% interrupts.CPU26.RES:Rescheduling_interrupts
8192 ± 3% -49.1% 4168 ± 39% interrupts.CPU27.NMI:Non-maskable_interrupts
8192 ± 3% -49.1% 4168 ± 39% interrupts.CPU27.PMI:Performance_monitoring_interrupts
8177 ± 4% -48.1% 4240 ± 40% interrupts.CPU29.NMI:Non-maskable_interrupts
8177 ± 4% -48.1% 4240 ± 40% interrupts.CPU29.PMI:Performance_monitoring_interrupts
519.33 ± 77% -52.5% 246.67 ± 5% interrupts.CPU29.RES:Rescheduling_interrupts
8211 ± 3% -49.3% 4163 ± 42% interrupts.CPU30.NMI:Non-maskable_interrupts
8211 ± 3% -49.3% 4163 ± 42% interrupts.CPU30.PMI:Performance_monitoring_interrupts
329.33 ± 10% -27.3% 239.33 ± 16% interrupts.CPU31.RES:Rescheduling_interrupts
8187 ± 3% -47.3% 4313 ± 39% interrupts.CPU32.NMI:Non-maskable_interrupts
8187 ± 3% -47.3% 4313 ± 39% interrupts.CPU32.PMI:Performance_monitoring_interrupts
369.00 ± 31% -40.2% 220.83 ± 20% interrupts.CPU32.RES:Rescheduling_interrupts
8203 ± 3% -47.7% 4290 ± 44% interrupts.CPU33.NMI:Non-maskable_interrupts
8203 ± 3% -47.7% 4290 ± 44% interrupts.CPU33.PMI:Performance_monitoring_interrupts
8208 ± 3% -43.7% 4617 ± 34% interrupts.CPU34.NMI:Non-maskable_interrupts
8208 ± 3% -43.7% 4617 ± 34% interrupts.CPU34.PMI:Performance_monitoring_interrupts
8200 ± 4% -56.7% 3549 ± 26% interrupts.CPU35.NMI:Non-maskable_interrupts
8200 ± 4% -56.7% 3549 ± 26% interrupts.CPU35.PMI:Performance_monitoring_interrupts
8173 ± 4% -52.6% 3877 ± 16% interrupts.CPU36.NMI:Non-maskable_interrupts
8173 ± 4% -52.6% 3877 ± 16% interrupts.CPU36.PMI:Performance_monitoring_interrupts
8176 ± 4% -43.4% 4626 ± 38% interrupts.CPU37.NMI:Non-maskable_interrupts
8176 ± 4% -43.4% 4626 ± 38% interrupts.CPU37.PMI:Performance_monitoring_interrupts
8207 ± 4% -46.3% 4405 ± 39% interrupts.CPU38.NMI:Non-maskable_interrupts
8207 ± 4% -46.3% 4405 ± 39% interrupts.CPU38.PMI:Performance_monitoring_interrupts
8163 ± 4% -45.3% 4467 ± 38% interrupts.CPU39.NMI:Non-maskable_interrupts
8163 ± 4% -45.3% 4467 ± 38% interrupts.CPU39.PMI:Performance_monitoring_interrupts
324.50 ± 23% -28.5% 232.17 ± 14% interrupts.CPU40.RES:Rescheduling_interrupts
8162 ± 5% -55.1% 3664 ± 30% interrupts.CPU43.NMI:Non-maskable_interrupts
8162 ± 5% -55.1% 3664 ± 30% interrupts.CPU43.PMI:Performance_monitoring_interrupts
8183 ± 4% -43.3% 4643 ± 37% interrupts.CPU44.NMI:Non-maskable_interrupts
8183 ± 4% -43.3% 4643 ± 37% interrupts.CPU44.PMI:Performance_monitoring_interrupts
8167 ± 4% -43.2% 4642 ± 37% interrupts.CPU46.NMI:Non-maskable_interrupts
8167 ± 4% -43.2% 4642 ± 37% interrupts.CPU46.PMI:Performance_monitoring_interrupts
7992 ± 4% -27.1% 5827 ± 21% interrupts.CPU48.NMI:Non-maskable_interrupts
7992 ± 4% -27.1% 5827 ± 21% interrupts.CPU48.PMI:Performance_monitoring_interrupts
7948 ± 5% -42.7% 4558 ± 34% interrupts.CPU50.NMI:Non-maskable_interrupts
7948 ± 5% -42.7% 4558 ± 34% interrupts.CPU50.PMI:Performance_monitoring_interrupts
7913 ± 5% -49.5% 3993 ± 55% interrupts.CPU51.NMI:Non-maskable_interrupts
7913 ± 5% -49.5% 3993 ± 55% interrupts.CPU51.PMI:Performance_monitoring_interrupts
7935 ± 5% -52.6% 3760 ± 58% interrupts.CPU52.NMI:Non-maskable_interrupts
7935 ± 5% -52.6% 3760 ± 58% interrupts.CPU52.PMI:Performance_monitoring_interrupts
7902 ± 5% -52.8% 3728 ± 61% interrupts.CPU53.NMI:Non-maskable_interrupts
7902 ± 5% -52.8% 3728 ± 61% interrupts.CPU53.PMI:Performance_monitoring_interrupts
7918 ± 5% -50.2% 3944 ± 42% interrupts.CPU54.NMI:Non-maskable_interrupts
7918 ± 5% -50.2% 3944 ± 42% interrupts.CPU54.PMI:Performance_monitoring_interrupts
249.83 ± 9% -22.0% 194.83 ± 16% interrupts.CPU59.RES:Rescheduling_interrupts
7899 ± 5% -49.5% 3992 ± 58% interrupts.CPU64.NMI:Non-maskable_interrupts
7899 ± 5% -49.5% 3992 ± 58% interrupts.CPU64.PMI:Performance_monitoring_interrupts
360.17 ± 48% -39.1% 219.33 ± 13% interrupts.CPU64.RES:Rescheduling_interrupts
330.17 ± 32% -35.9% 211.50 ± 10% interrupts.CPU71.RES:Rescheduling_interrupts
7764 ± 5% -31.1% 5348 ± 30% interrupts.CPU74.NMI:Non-maskable_interrupts
7764 ± 5% -31.1% 5348 ± 30% interrupts.CPU74.PMI:Performance_monitoring_interrupts
7820 ± 4% -30.3% 5448 ± 15% interrupts.CPU77.NMI:Non-maskable_interrupts
7820 ± 4% -30.3% 5448 ± 15% interrupts.CPU77.PMI:Performance_monitoring_interrupts
7745 ± 5% -30.7% 5364 ± 21% interrupts.CPU88.NMI:Non-maskable_interrupts
7745 ± 5% -30.7% 5364 ± 21% interrupts.CPU88.PMI:Performance_monitoring_interrupts
3245 ± 10% -17.0% 2694 ± 10% interrupts.CPU95.CAL:Function_call_interrupts
1592 ± 3% -21.1% 1256 ± 10% interrupts.CPU95.RES:Rescheduling_interrupts
1669 ± 18% -18.7% 1357 ± 2% interrupts.CPU96.CAL:Function_call_interrupts
331.00 ± 15% -36.1% 211.50 ± 22% interrupts.IWI:IRQ_work_interrupts
1409742 ± 6% -36.2% 899323 ± 15% interrupts.NMI:Non-maskable_interrupts
1409742 ± 6% -36.2% 899323 ± 15% interrupts.PMI:Performance_monitoring_interrupts
68748 ± 7% -29.6% 48431 ± 6% interrupts.RES:Rescheduling_interrupts
vm-scalability.time.system_time
4600 +--------------------------------------------------------------------+
|. + +. .++.+ +. .++.+ .+ +.+ .+ ++. .+. .++.++.+.++.++.++.|
4400 |-+ ++.+ +.+ + + + ++ ++ |
| |
4200 |-+ |
| |
4000 |-+ |
| O O O |
3800 |-+ O O O O O O O O O |
| O OO O O O O O O O |
3600 |-+O O O O O O O |
| O O O O O O |
3400 |-+ O |
| O |
3200 +--------------------------------------------------------------------+
vm-scalability.time.percent_of_cpu_this_job_got
18000 +-------------------------------------------------------------------+
|+ +.+ + ++ ++ + + +.+ + +.+ + +.+ + +. +.+ |
17500 |-+ + + + + +.++.++.++. : +.|
17000 |-+ + |
| |
16500 |-+ |
16000 |-+ O O OO O O O |
| O O OO O O |
15500 |-O O OO O O O |
15000 |-+ O O O O |
| O O O O O O O |
14500 |-+ O O O |
14000 |-+ O O |
| O |
13500 +-------------------------------------------------------------------+
vm-scalability.time.minor_page_faults
1e+07 +-------------------------------------------------------------------+
| +. +. +. + +.+ .+. .|
9e+06 |.++.++.+ + ++.+.+ + + +.++.++.+.++.++.++. : + ++.+ .++.++.++ |
8e+06 |-+ + + + |
| |
7e+06 |-+ |
| |
6e+06 |-+ |
| |
5e+06 |-+ |
4e+06 |-+ |
| O O O OO |
3e+06 |-O OO O O O O OO O O OO O OO OO O O O O |
| O O OO O O O O O |
2e+06 +-------------------------------------------------------------------+
vm-scalability.median
47000 +-------------------------------------------------------------------+
| O O OO O O |
46500 |-O O O O O O |
| O O O O O O OO O |
46000 |-+ O O O O OO O OO O |
| O O O O O O |
45500 |-+ |
| |
45000 |-+ |
| |
44500 |-+ |
| |
44000 |.+ .++.++. .+.+ .++. +. .+. +.|
| ++.++ ++ + + ++.++ ++.++.++.++.++.+.++.++.++.++.+ |
43500 +-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org Intel Corporation
Thanks,
Oliver Sang
View attachment "config-5.15.0-rc4-00039-g76ff9ff49a47" of type "text/plain" (176822 bytes)
View attachment "job-script" of type "text/plain" (8066 bytes)
View attachment "job.yaml" of type "text/plain" (5320 bytes)
View attachment "reproduce" of type "text/plain" (2038 bytes)