lists.openwall.net — Open Source and information security mailing list archives
Date: Mon, 28 Sep 2015 14:49:32 +0800
From: kernel test robot <ying.huang@...el.com>
To: Jeff Layton <jeff.layton@...marydata.com>
Cc: LKML <linux-kernel@...r.kernel.org>
Subject: [lkp] [nfsd] 4aac1bf05b: -2.9% fsmark.files_per_sec

FYI, we noticed the below changes on

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/iterations/nr_threads/disk/fs/fs2/filesize/test_size/sync_method/nr_directories/nr_files_per_directory:
  lkp-ne04/fsmark/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/1x/32t/1HDD/xfs/nfsv4/5K/400M/fsyncBeforeClose/16d/256fpd

commit:
  cd2d35ff27c4fda9ba73b0aa84313e8e20ce4d2c
  4aac1bf05b053a201a4b392dd9a684fb2b7e6103

cd2d35ff27c4fda9 4aac1bf05b053a201a4b392dd9
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
  14415356 ±  0%      +2.6%   14788625 ±  1%  fsmark.app_overhead
    441.60 ±  0%      -2.9%     428.80 ±  0%  fsmark.files_per_sec
    185.78 ±  0%      +2.9%     191.26 ±  0%  fsmark.time.elapsed_time
    185.78 ±  0%      +2.9%     191.26 ±  0%  fsmark.time.elapsed_time.max
     97472 ±  0%      -2.8%      94713 ±  0%  fsmark.time.involuntary_context_switches
   3077117 ± 95%    +251.2%   10805440 ±112%  latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
     12999 ±  0%     +32.9%      17276 ±  0%  proc-vmstat.nr_slab_unreclaimable
     64568 ±  4%     -14.8%      55032 ±  0%  softirqs.RCU
     51999 ±  0%     +32.9%      69111 ±  0%  meminfo.SUnreclaim
    159615 ±  0%     +13.5%     181115 ±  0%  meminfo.Slab
      3.75 ±  0%      +3.3%       3.88 ±  1%  turbostat.%Busy
     77.25 ±  0%      +6.5%      82.25 ±  0%  turbostat.Avg_MHz
  30813025 ±  2%     -14.5%   26338527 ±  9%  cpuidle.C1E-NHM.time
    164180 ±  0%     -28.9%     116758 ±  7%  cpuidle.C1E-NHM.usage
      1738 ±  2%     -81.2%     326.75 ±  4%  cpuidle.POLL.usage
     29979 ±  2%     +44.3%      43273 ±  4%  numa-meminfo.node0.SUnreclaim
     94889 ±  0%     +19.8%     113668 ±  2%  numa-meminfo.node0.Slab
     22033 ±  3%     +17.3%      25835 ±  7%  numa-meminfo.node1.SUnreclaim
      7404 ±  1%      -2.7%       7206 ±  0%  vmstat.io.bo
     27121 ±  0%      -4.8%      25817 ±  0%  vmstat.system.cs
      3025 ±  0%     -13.5%       2615 ±  0%  vmstat.system.in
     50126 ±  1%     +11.5%      55893 ±  1%  numa-vmstat.node0.nr_dirtied
      7494 ±  2%     +44.3%      10818 ±  4%  numa-vmstat.node0.nr_slab_unreclaimable
     50088 ±  1%     +11.6%      55900 ±  1%  numa-vmstat.node0.nr_written
      5507 ±  3%     +17.3%       6458 ±  7%  numa-vmstat.node1.nr_slab_unreclaimable
      7164 ±  2%    +275.2%      26885 ±  0%  slabinfo.kmalloc-16.active_objs
      7164 ±  2%    +275.3%      26885 ±  0%  slabinfo.kmalloc-16.num_objs
      7367 ±  1%    +787.7%      65401 ±  0%  slabinfo.kmalloc-192.active_objs
    179.00 ±  1%    +771.8%       1560 ±  0%  slabinfo.kmalloc-192.active_slabs
      7537 ±  1%    +770.0%      65572 ±  0%  slabinfo.kmalloc-192.num_objs
    179.00 ±  1%    +771.8%       1560 ±  0%  slabinfo.kmalloc-192.num_slabs
      3631 ±  7%    +522.3%      22600 ±  0%  slabinfo.kmalloc-256.active_objs
    145.50 ±  4%    +398.1%     724.75 ±  0%  slabinfo.kmalloc-256.active_slabs
      4667 ±  4%    +397.3%      23210 ±  0%  slabinfo.kmalloc-256.num_objs
    145.50 ±  4%    +398.1%     724.75 ±  0%  slabinfo.kmalloc-256.num_slabs
     17448 ±  2%     +75.6%      30643 ±  0%  slabinfo.kmalloc-32.active_objs
    137.50 ±  2%     +76.5%     242.75 ±  0%  slabinfo.kmalloc-32.active_slabs
     17651 ±  2%     +76.4%      31139 ±  0%  slabinfo.kmalloc-32.num_objs
    137.50 ±  2%     +76.5%     242.75 ±  0%  slabinfo.kmalloc-32.num_slabs
      2387 ±  3%     -10.7%       2132 ±  8%  slabinfo.kmalloc-512.active_objs
    491.25 ±  3%     +33.9%     658.00 ± 11%  slabinfo.numa_policy.active_objs
    491.25 ±  3%     +33.9%     658.00 ± 11%  slabinfo.numa_policy.num_objs
      2128 ±  9%     +59.3%       3391 ± 34%  sched_debug.cfs_rq[10]:/.exec_clock
     18088 ± 17%     +47.0%      26582 ± 29%  sched_debug.cfs_rq[10]:/.min_vruntime
      4326 ± 11%     -22.2%       3368 ± 18%  sched_debug.cfs_rq[5]:/.exec_clock
      1459 ±  1%     -10.8%       1302 ±  3%  sched_debug.cpu#0.nr_uninterruptible
    122217 ±  7%     -18.6%      99447 ±  2%  sched_debug.cpu#1.nr_switches
    122732 ±  8%     -18.5%      99972 ±  2%  sched_debug.cpu#1.sched_count
     45603 ± 10%     -20.1%      36442 ±  2%  sched_debug.cpu#1.sched_goidle
     27004 ±  3%     -18.9%      21895 ±  5%  sched_debug.cpu#1.ttwu_local
     15469 ±  5%     +17.2%      18132 ±  6%  sched_debug.cpu#10.nr_load_updates
     78564 ±  8%     +26.6%      99492 ±  5%  sched_debug.cpu#10.nr_switches
     78605 ±  8%     +26.7%      99557 ±  4%  sched_debug.cpu#10.sched_count
     27470 ±  9%     +24.7%      34268 ±  7%  sched_debug.cpu#10.sched_goidle
     38215 ±  1%     +37.4%      52499 ± 13%  sched_debug.cpu#10.ttwu_count
     14816 ±  5%     +22.8%      18196 ±  2%  sched_debug.cpu#10.ttwu_local
     19690 ± 21%     -29.9%      13802 ± 15%  sched_debug.cpu#11.nr_switches
     54.25 ±  2%     -47.5%      28.50 ± 25%  sched_debug.cpu#11.nr_uninterruptible
     19721 ± 21%     -29.9%      13828 ± 15%  sched_debug.cpu#11.sched_count
     14545 ±  2%     +15.4%      16779 ±  4%  sched_debug.cpu#12.nr_load_updates
     72087 ± 11%     +27.9%      92204 ±  7%  sched_debug.cpu#12.nr_switches
     72126 ± 11%     +28.1%      92422 ±  7%  sched_debug.cpu#12.sched_count
     25418 ± 13%     +24.4%      31626 ±  7%  sched_debug.cpu#12.sched_goidle
     33399 ± 15%     +38.5%      46255 ± 13%  sched_debug.cpu#12.ttwu_count
     51.25 ± 10%     -39.0%      31.25 ± 21%  sched_debug.cpu#13.nr_uninterruptible
      2593 ± 11%     -21.8%       2028 ± 10%  sched_debug.cpu#13.ttwu_local
     71266 ±  3%     +20.1%      85620 ±  5%  sched_debug.cpu#14.nr_switches
     71306 ±  3%     +20.4%      85827 ±  5%  sched_debug.cpu#14.sched_count
     24634 ±  3%     +18.8%      29259 ±  4%  sched_debug.cpu#14.sched_goidle
     34625 ± 11%     +19.9%      41506 ± 11%  sched_debug.cpu#14.ttwu_count
     13866 ±  3%     +20.6%      16726 ±  5%  sched_debug.cpu#14.ttwu_local
     12683 ±  4%     -14.7%      10817 ±  2%  sched_debug.cpu#15.nr_load_updates
     49.75 ±  6%     -46.2%      26.75 ± 28%  sched_debug.cpu#15.nr_uninterruptible
      3374 ± 12%     -28.1%       2427 ± 18%  sched_debug.cpu#15.ttwu_local
    186563 ±  5%     -12.1%     163975 ±  4%  sched_debug.cpu#2.nr_switches
     -1324 ± -2%     -16.0%      -1111 ± -1%  sched_debug.cpu#2.nr_uninterruptible
    187499 ±  5%     -11.2%     166447 ±  4%  sched_debug.cpu#2.sched_count
     67465 ±  7%     -13.6%      58308 ±  6%  sched_debug.cpu#2.sched_goidle
     36525 ±  4%     -14.6%      31193 ±  1%  sched_debug.cpu#2.ttwu_local
     23697 ±  5%     -13.2%      20572 ±  9%  sched_debug.cpu#3.nr_load_updates
    128070 ±  1%     -22.9%      98687 ±  5%  sched_debug.cpu#3.nr_switches
    129859 ±  2%     -23.5%      99357 ±  4%  sched_debug.cpu#3.sched_count
     48833 ±  1%     -23.7%      37243 ±  6%  sched_debug.cpu#3.sched_goidle
     61622 ±  3%     -24.2%      46694 ±  5%  sched_debug.cpu#3.ttwu_count
     27510 ±  7%     -20.6%      21840 ±  8%  sched_debug.cpu#3.ttwu_local
     81675 ±  7%     -13.6%      70536 ±  1%  sched_debug.cpu#4.ttwu_count
     34076 ±  3%     -12.9%      29683 ±  1%  sched_debug.cpu#4.ttwu_local
    124470 ±  4%     -14.1%     106865 ±  8%  sched_debug.cpu#5.sched_count
     62502 ±  3%     -20.8%      49519 ±  9%  sched_debug.cpu#5.ttwu_count
     26562 ±  0%     -17.7%      21853 ± 10%  sched_debug.cpu#5.ttwu_local
    181661 ± 10%     -15.1%     154229 ±  6%  sched_debug.cpu#6.nr_switches
    181937 ± 10%     -13.5%     157379 ±  6%  sched_debug.cpu#6.sched_count
     66672 ± 14%     -16.6%      55632 ±  9%  sched_debug.cpu#6.sched_goidle
     78296 ±  2%     -10.2%      70346 ±  6%  sched_debug.cpu#6.ttwu_count
     33536 ±  1%     -14.4%      28696 ±  1%  sched_debug.cpu#6.ttwu_local
    131463 ±  6%     -17.0%     109140 ±  4%  sched_debug.cpu#7.nr_switches
    -32.25 ±-58%    -100.8%       0.25 ±9467% sched_debug.cpu#7.nr_uninterruptible
    133606 ±  7%     -17.2%     110671 ±  4%  sched_debug.cpu#7.sched_count
     50986 ±  7%     -16.6%      42525 ±  6%  sched_debug.cpu#7.sched_goidle
     61388 ±  2%     -19.8%      49213 ±  5%  sched_debug.cpu#7.ttwu_count
     26637 ±  2%     -21.8%      20837 ±  3%  sched_debug.cpu#7.ttwu_local
     12312 ±  3%      +9.4%      13474 ±  4%  sched_debug.cpu#8.nr_load_updates
     53.50 ±  6%     -44.9%      29.50 ± 27%  sched_debug.cpu#9.nr_uninterruptible
      2724 ± 15%     -23.7%       2078 ± 26%  sched_debug.cpu#9.ttwu_local

lkp-ne04: Nehalem-EP
Memory: 12G

[The original email contained ASCII time-series plots of both kernels over
repeated runs; they are garbled beyond recovery in this archive copy. The
metrics plotted were: cpuidle.POLL.usage, cpuidle.C1E-NHM.usage,
fsmark.files_per_sec, fsmark.time.elapsed_time, fsmark.time.elapsed_time.max,
fsmark.time.involuntary_context_switches, vmstat.system.in,
numa-vmstat.node0.nr_slab_unreclaimable, numa-vmstat.node0.nr_dirtied,
numa-vmstat.node0.nr_written, numa-meminfo.node0.SUnreclaim,
proc-vmstat.nr_slab_unreclaimable, meminfo.Slab, meminfo.SUnreclaim,
slabinfo.kmalloc-256.{active_objs,num_objs,active_slabs,num_slabs},
slabinfo.kmalloc-192.{active_objs,num_objs,active_slabs,num_slabs},
slabinfo.kmalloc-32.{active_objs,num_objs,active_slabs,num_slabs}, and
kmsg.usb_usb7:can_t_set_config___error.]

	[*] bisect-good sample
	[O] bisect-bad sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Thanks,
Ying Huang

View attachment "job.yaml" of type "text/plain" (3753 bytes)
View attachment "reproduce" of type "text/plain" (1945 bytes)
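[Editor's note on reading the table: the %change column is simply the relative
difference of the second commit's mean against the first, in percent. A minimal
sketch, using the headline fsmark.files_per_sec row; pct_change is a
hypothetical helper for illustration, not part of the lkp-tests tooling:]

```python
def pct_change(before, after):
    # Relative change of the second kernel's mean vs. the first, in percent.
    return (after - before) / before * 100.0

# fsmark.files_per_sec from the table: 441.60 (parent) -> 428.80 (patched)
print(round(pct_change(441.60, 428.80), 1))  # -2.9, matching the Subject line
```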