Message-ID: <202512181208.753b9f6e-lkp@intel.com>
Date: Thu, 18 Dec 2025 12:59:53 +0800
From: kernel test robot <oliver.sang@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: <oe-lkp@...ts.linux.dev>, <lkp@...el.com>, <linux-kernel@...r.kernel.org>,
	<x86@...nel.org>, Ingo Molnar <mingo@...nel.org>, Linus Torvalds
	<torvalds@...ux-foundation.org>, Dietmar Eggemann <dietmar.eggemann@....com>,
	Juri Lelli <juri.lelli@...hat.com>, Mel Gorman <mgorman@...e.de>, "Shrikanth
 Hegde" <sshegde@...ux.ibm.com>, Valentin Schneider <vschneid@...hat.com>,
	Vincent Guittot <vincent.guittot@...aro.org>, <aubrey.li@...ux.intel.com>,
	<yu.c.chen@...el.com>, <oliver.sang@...el.com>
Subject: [tip:sched/core] [sched/fair] 089d84203a: pts.schbench.32.usec,_99.9th_latency_percentile 52.4% regression



Hello,

kernel test robot noticed a 52.4% regression of pts.schbench.32.usec,_99.9th_latency_percentile on:
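
For reference, the %change figures in these reports are plain relative deltas, (new - old) / old * 100. Plugging in the 99.9th-percentile means from the comparison table below (45209 usec on the parent, 68896 usec on the commit) reproduces the headline number:

```shell
# %change as the lkp report computes it: (new - old) / old * 100,
# using the 99.9th percentile latency means from the table below.
awk 'BEGIN { old = 45209; new = 68896; printf "%.1f%%\n", (new - old) / old * 100 }'
```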


commit: 089d84203ad42bc8fd6dbf41683e162ac6e848cd ("sched/fair: Fold the sched_avg update")
https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git sched/core


testcase: pts
config: x86_64-rhel-9.4
compiler: gcc-14
test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (Cascade Lake) with 512G memory
parameters:

	test: schbench-1.1.0
	option_a: 32
	option_b: 2
	cpufreq_governor: performance
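
A rough standalone equivalent of this job (outside the pts harness) would be a direct schbench run; note that the mapping of option_a=32 / option_b=2 onto schbench's -t (worker threads) and -m (message threads) flags is an assumption here, not taken from the job file:

```shell
# Sketch only: assumed flag mapping for the pts schbench-1.1.0 job above.
# -m: message threads, -t: worker threads per message thread, -r: runtime (s).
schbench -m 2 -t 32 -r 30
```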


In addition, the commit also has a significant impact on the following test:

+------------------+-----------------------------------------------------------------------------------------------+
| testcase: change | pts: pts.stress-ng.Semaphores.bogo_ops_s 17.0% improvement                                    |
| test machine     | 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (Cascade Lake) with 512G memory |
| test parameters  | cpufreq_governor=performance                                                                  |
|                  | option_a=Semaphores                                                                           |
|                  | test=stress-ng-1.11.0                                                                         |
+------------------+-----------------------------------------------------------------------------------------------+
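
Outside the pts wrapper, the improved workload corresponds to stress-ng's semaphore stressor; a minimal direct invocation (runtime chosen arbitrarily here) would be:

```shell
# --sem selects the POSIX semaphore stressor; 0 workers = one per CPU.
# --metrics-brief prints the bogo ops/s figure the table above reports.
stress-ng --sem 0 --timeout 60 --metrics-brief
```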


If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@...el.com>
| Closes: https://lore.kernel.org/oe-lkp/202512181208.753b9f6e-lkp@intel.com


Details are as below:
-------------------------------------------------------------------------------------------------->


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20251218/202512181208.753b9f6e-lkp@intel.com
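
Per the lkp-tests documentation linked at the bottom of this report, a local reproduction with the downloaded job file generally follows this shape (the job filename below is a placeholder for whatever the archive contains):

```shell
# Reproduce with the lkp-tests harness; job.yaml stands in for the
# actual job file shipped in the archive above.
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
make install
lkp install job.yaml   # install the job's dependencies
lkp run job.yaml       # run the benchmark as the robot did
```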

=========================================================================================
compiler/cpufreq_governor/kconfig/option_a/option_b/rootfs/tbox_group/test/testcase:
  gcc-14/performance/x86_64-rhel-9.4/32/2/debian-12-x86_64-phoronix/lkp-csl-2sp7/schbench-1.1.0/pts

commit: 
  38a68b982d ("<linux/compiler_types.h>: Add the __signed_scalar_typeof() helper")
  089d84203a ("sched/fair: Fold the sched_avg update")

38a68b982dd0b10e 089d84203ad42bc8fd6dbf41683 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
     22925            -8.7%      20922        vmstat.system.cs
      9350 ±  3%     +66.2%      15536        pts.schbench.32.usec,_50.0th_latency_percentile
     15508 ±  2%     +68.7%      26163        pts.schbench.32.usec,_75.0th_latency_percentile
     21587 ±  2%     +66.3%      35891        pts.schbench.32.usec,_90.0th_latency_percentile
     45209 ±  8%     +52.4%      68896 ±  2%  pts.schbench.32.usec,_99.9th_latency_percentile
    836593           -13.5%     723826        pts.time.involuntary_context_switches
    263006            -5.9%     247368        pts.time.voluntary_context_switches
     19.44 ±  9%      -1.9       17.56 ± 10%  perf-stat.i.cache-miss-rate%
     23851           -10.1%      21445        perf-stat.i.context-switches
      2482 ±  3%     +55.0%       3848        perf-stat.i.cpu-migrations
     58297 ±  6%     +22.2%      71251 ±  5%  perf-stat.i.cycles-between-cache-misses
      4.28            +1.9%       4.37        perf-stat.i.major-faults
    459621 ±  4%     -10.2%     412737 ±  6%  perf-stat.i.node-load-misses
     59.21 ±  2%      -3.7       55.48 ±  3%  perf-stat.i.node-store-miss-rate%
     23548            -9.1%      21411        perf-stat.ps.context-switches
      2453 ±  3%     +57.1%       3853        perf-stat.ps.cpu-migrations
    451933 ±  4%     -10.2%     405766 ±  6%  perf-stat.ps.node-load-misses
      3.80 ±101%      -2.4        1.37 ±158%  perf-profile.calltrace.cycles-pp.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_write_begin.generic_perform_write.shmem_file_write_iter
      3.80 ±101%      -2.4        1.37 ±158%  perf-profile.calltrace.cycles-pp.shmem_get_folio_gfp.shmem_write_begin.generic_perform_write.shmem_file_write_iter.vfs_write
      3.80 ±101%      -2.4        1.37 ±158%  perf-profile.calltrace.cycles-pp.shmem_write_begin.generic_perform_write.shmem_file_write_iter.vfs_write.ksys_write
      5.04 ± 89%      -2.2        2.88 ±136%  perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.x64_sys_call.do_syscall_64.entry_SYSCALL_64_after_hwframe
      5.04 ± 89%      -2.2        2.88 ±136%  perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.x64_sys_call.do_syscall_64
      5.04 ± 89%      -2.2        2.88 ±136%  perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.x64_sys_call.do_syscall_64.entry_SYSCALL_64_after_hwframe
      5.04 ± 89%      -2.2        2.88 ±136%  perf-profile.calltrace.cycles-pp.x64_sys_call.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3.80 ±101%      -2.4        1.37 ±158%  perf-profile.children.cycles-pp.shmem_alloc_and_add_folio
      3.80 ±101%      -2.4        1.37 ±158%  perf-profile.children.cycles-pp.shmem_get_folio_gfp
      3.80 ±101%      -2.4        1.37 ±158%  perf-profile.children.cycles-pp.shmem_write_begin
      5.04 ± 89%      -2.2        2.88 ±136%  perf-profile.children.cycles-pp.__x64_sys_exit_group
      5.04 ± 89%      -2.2        2.88 ±136%  perf-profile.children.cycles-pp.x64_sys_call
      3398 ± 14%     +45.6%       4947 ± 16%  sched_debug.cfs_rq:/.avg_vruntime.stddev
     10.87 ± 38%    +268.9%      40.11 ± 40%  sched_debug.cfs_rq:/.load_avg.avg
    195.51 ±  8%     -32.1%     132.77 ± 19%  sched_debug.cfs_rq:/.runnable_avg.avg
    195.49 ±  8%     -32.1%     132.67 ± 19%  sched_debug.cfs_rq:/.util_avg.avg
     16931 ± 18%     +59.7%      27048 ± 19%  sched_debug.cfs_rq:/.zero_vruntime.max
      2742 ± 11%     +48.8%       4080 ±  9%  sched_debug.cfs_rq:/.zero_vruntime.stddev
      9.71 ± 10%   +1542.6%     159.44 ± 41%  sched_debug.cfs_rq:/system.slice.load_avg.avg
     49.20 ± 29%   +1273.6%     675.80 ± 24%  sched_debug.cfs_rq:/system.slice.load_avg.max
     10.96 ±  9%   +1016.0%     122.30 ± 26%  sched_debug.cfs_rq:/system.slice.load_avg.stddev
    230.81 ±  9%     -33.2%     154.14 ± 20%  sched_debug.cfs_rq:/system.slice.runnable_avg.avg
      8.42 ± 49%     -55.9%       3.71 ± 14%  sched_debug.cfs_rq:/system.slice.se->avg.load_avg.avg
    180.50 ± 57%     -79.6%      36.90 ± 46%  sched_debug.cfs_rq:/system.slice.se->avg.load_avg.max
     22.27 ± 48%     -71.5%       6.34 ± 22%  sched_debug.cfs_rq:/system.slice.se->avg.load_avg.stddev
    230.83 ±  9%     -33.3%     153.85 ± 20%  sched_debug.cfs_rq:/system.slice.se->avg.runnable_avg.avg
    230.83 ±  9%     -33.4%     153.79 ± 20%  sched_debug.cfs_rq:/system.slice.se->avg.util_avg.avg
      3241 ± 31%     -81.3%     607.00 ± 25%  sched_debug.cfs_rq:/system.slice.se->load.weight.min
      1893 ± 40%    +627.5%      13772 ± 35%  sched_debug.cfs_rq:/system.slice.tg_load_avg.avg
      2803 ± 30%    +556.7%      18406 ± 25%  sched_debug.cfs_rq:/system.slice.tg_load_avg.max
      1715 ± 44%    +604.5%      12086 ± 37%  sched_debug.cfs_rq:/system.slice.tg_load_avg.min
    231.68 ± 59%    +643.9%       1723 ± 29%  sched_debug.cfs_rq:/system.slice.tg_load_avg.stddev
     22.27 ± 41%    +641.4%     165.13 ± 38%  sched_debug.cfs_rq:/system.slice.tg_load_avg_contrib.avg
    230.81 ±  9%     -33.2%     154.07 ± 20%  sched_debug.cfs_rq:/system.slice.util_avg.avg
      0.03 ± 33%     -51.4%       0.02 ± 24%  perf-sched.sch_delay.avg.ms.exit_to_user_mode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown].[unknown]
      0.00 ± 31%    +569.0%       0.03 ± 27%  perf-sched.sch_delay.avg.ms.futex_do_wait.__futex_wait.futex_wait.do_futex.__x64_sys_futex
      0.24 ± 18%     +43.3%       0.34 ±  7%  perf-sched.sch_delay.avg.ms.irqentry_exit.asm_sysvec_apic_timer_interrupt.[unknown]
      0.24 ± 17%     +40.8%       0.34 ±  5%  perf-sched.sch_delay.avg.ms.irqentry_exit.asm_sysvec_apic_timer_interrupt.[unknown].[unknown]
      0.01 ± 13%    +313.3%       0.04 ± 25%  perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.do_epoll_pwait.part
     25.39 ± 24%    +103.1%      51.58 ± 35%  perf-sched.sch_delay.max.ms.futex_do_wait.__futex_wait.futex_wait.do_futex.__x64_sys_futex
     42.11 ± 55%    +115.8%      90.86 ± 40%  perf-sched.sch_delay.max.ms.irqentry_exit.asm_sysvec_apic_timer_interrupt.[unknown]
     36.90 ±  8%    +134.0%      86.33 ± 35%  perf-sched.sch_delay.max.ms.irqentry_exit.asm_sysvec_apic_timer_interrupt.[unknown].[unknown]
     29.57 ± 17%     +94.8%      57.62 ± 46%  perf-sched.sch_delay.max.ms.irqentry_exit.asm_sysvec_reschedule_ipi.[unknown]
     27.31 ± 18%     +98.5%      54.21 ± 49%  perf-sched.sch_delay.max.ms.irqentry_exit.asm_sysvec_reschedule_ipi.[unknown].[unknown]
      1.28 ± 54%    +617.0%       9.21 ± 82%  perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.do_epoll_pwait.part
      0.15 ± 15%     +30.9%       0.20 ±  5%  perf-sched.total_sch_delay.average.ms
     24.08 ±  3%     +12.1%      27.00        perf-sched.total_wait_and_delay.average.ms
     23.93 ±  3%     +12.0%      26.80        perf-sched.total_wait_time.average.ms
     12.02 ±  8%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      8.86 ±  9%     -33.9%       5.86 ±  3%  perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown].[unknown]
      9.27 ±  6%     +37.9%      12.79 ±  5%  perf-sched.wait_and_delay.avg.ms.futex_do_wait.__futex_wait.futex_wait.do_futex.__x64_sys_futex
     23.08 ±  2%     +19.1%      27.49 ±  2%  perf-sched.wait_and_delay.avg.ms.irqentry_exit.asm_sysvec_apic_timer_interrupt.[unknown]
     23.25 ±  2%     +18.7%      27.60        perf-sched.wait_and_delay.avg.ms.irqentry_exit.asm_sysvec_apic_timer_interrupt.[unknown].[unknown]
     24.82 ± 11%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.irqentry_exit.asm_sysvec_call_function_single.[unknown]
     24.47 ± 12%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.irqentry_exit.asm_sysvec_call_function_single.[unknown].[unknown]
     27.22 ±  7%     +14.8%      31.24 ±  2%  perf-sched.wait_and_delay.avg.ms.irqentry_exit.asm_sysvec_reschedule_ipi.[unknown]
     27.68 ±  7%     +14.5%      31.70 ±  2%  perf-sched.wait_and_delay.avg.ms.irqentry_exit.asm_sysvec_reschedule_ipi.[unknown].[unknown]
    561.21           +15.2%     646.76        perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
      1151 ±  3%    -100.0%       0.00        perf-sched.wait_and_delay.count.exit_to_user_mode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      9330 ± 12%     +85.2%      17277 ±  3%  perf-sched.wait_and_delay.count.exit_to_user_mode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown].[unknown]
     49629 ±  5%     -15.4%      41983 ±  3%  perf-sched.wait_and_delay.count.irqentry_exit.asm_sysvec_apic_timer_interrupt.[unknown]
     48187 ±  5%     -15.4%      40747 ±  3%  perf-sched.wait_and_delay.count.irqentry_exit.asm_sysvec_apic_timer_interrupt.[unknown].[unknown]
      4019 ±  8%    -100.0%       0.00        perf-sched.wait_and_delay.count.irqentry_exit.asm_sysvec_call_function_single.[unknown]
      3792 ±  7%    -100.0%       0.00        perf-sched.wait_and_delay.count.irqentry_exit.asm_sysvec_call_function_single.[unknown].[unknown]
     10992 ±  4%      -9.9%       9903 ±  2%  perf-sched.wait_and_delay.count.irqentry_exit.asm_sysvec_reschedule_ipi.[unknown]
    183.81 ±149%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.exit_to_user_mode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
    150.43 ±  9%     +39.7%     210.17 ±  4%  perf-sched.wait_and_delay.max.ms.irqentry_exit.asm_sysvec_apic_timer_interrupt.[unknown]
    138.97 ±  7%     +48.9%     206.98 ±  6%  perf-sched.wait_and_delay.max.ms.irqentry_exit.asm_sysvec_apic_timer_interrupt.[unknown].[unknown]
    120.74 ±  6%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.irqentry_exit.asm_sysvec_call_function_single.[unknown]
    117.76 ±  7%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.irqentry_exit.asm_sysvec_call_function_single.[unknown].[unknown]
      1.04 ± 31%     -71.6%       0.29 ± 44%  perf-sched.wait_time.avg.ms.__cond_resched.dput.part.0.__fput
      8.83 ±  8%     -33.9%       5.84 ±  3%  perf-sched.wait_time.avg.ms.exit_to_user_mode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown].[unknown]
      9.27 ±  6%     +37.7%      12.76 ±  5%  perf-sched.wait_time.avg.ms.futex_do_wait.__futex_wait.futex_wait.do_futex.__x64_sys_futex
     22.84 ±  2%     +18.9%      27.15 ±  2%  perf-sched.wait_time.avg.ms.irqentry_exit.asm_sysvec_apic_timer_interrupt.[unknown]
     23.01           +18.5%      27.26        perf-sched.wait_time.avg.ms.irqentry_exit.asm_sysvec_apic_timer_interrupt.[unknown].[unknown]
     27.09 ±  7%     +14.7%      31.07 ±  2%  perf-sched.wait_time.avg.ms.irqentry_exit.asm_sysvec_reschedule_ipi.[unknown]
     27.55 ±  7%     +14.5%      31.53 ±  2%  perf-sched.wait_time.avg.ms.irqentry_exit.asm_sysvec_reschedule_ipi.[unknown].[unknown]
    561.20           +15.2%     646.75        perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
    131.24 ±  9%     +37.8%     180.91 ±  6%  perf-sched.wait_time.max.ms.irqentry_exit.asm_sysvec_apic_timer_interrupt.[unknown]
    128.37 ±  7%     +41.2%     181.24 ±  7%  perf-sched.wait_time.max.ms.irqentry_exit.asm_sysvec_apic_timer_interrupt.[unknown].[unknown]
    115.53 ±  7%     +17.9%     136.21 ±  7%  perf-sched.wait_time.max.ms.irqentry_exit.asm_sysvec_call_function_single.[unknown]
    112.57 ±  5%     +47.8%     166.40 ± 67%  perf-sched.wait_time.max.ms.irqentry_exit.asm_sysvec_call_function_single.[unknown].[unknown]
      7.69 ± 33%   +6561.5%     512.09 ±234%  perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.do_epoll_pwait.part


***************************************************************************************************
lkp-csl-2sp7: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (Cascade Lake) with 512G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/option_a/rootfs/tbox_group/test/testcase:
  gcc-14/performance/x86_64-rhel-9.4/Semaphores/debian-12-x86_64-phoronix/lkp-csl-2sp7/stress-ng-1.11.0/pts

commit: 
  38a68b982d ("<linux/compiler_types.h>: Add the __signed_scalar_typeof() helper")
  089d84203a ("sched/fair: Fold the sched_avg update")

38a68b982dd0b10e 089d84203ad42bc8fd6dbf41683 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
      0.94 ± 12%      +0.1        1.05 ± 11%  mpstat.cpu.all.irq%
  22175775 ±  2%     -29.7%   15587298 ±  2%  vmstat.system.cs
  43933492           +17.0%   51381109        pts.stress-ng.Semaphores.bogo_ops_s
 1.201e+09           -29.9%  8.415e+08 ±  2%  pts.time.involuntary_context_switches
      2047            -2.8%       1990        pts.time.system_time
    774.22            +6.9%     827.70        pts.time.user_time
     18302 ± 19%    +182.0%      51617 ± 19%  pts.time.voluntary_context_switches
 1.385e+08            +9.9%  1.522e+08        perf-stat.i.branch-misses
  23793135 ±  2%     -29.7%   16716565 ±  2%  perf-stat.i.context-switches
    165.66 ±  4%     +49.0%     246.91        perf-stat.i.cpu-migrations
 2.678e+10            -3.7%   2.58e+10        perf-stat.i.dTLB-loads
 1.557e+10            -3.4%  1.504e+10        perf-stat.i.dTLB-stores
  21025195           +24.2%   26111221 ±  2%  perf-stat.i.iTLB-load-misses
 1.017e+11            -2.9%  9.874e+10        perf-stat.i.instructions
     32500 ± 30%     -51.7%      15693 ± 10%  perf-stat.i.instructions-per-iTLB-miss
    686.98            -2.6%     668.88        perf-stat.i.metric.M/sec
      0.59            +0.1        0.65        perf-stat.overall.branch-miss-rate%
      1.50            +3.9%       1.56        perf-stat.overall.cpi
      4831 ±  2%     -21.8%       3780 ±  2%  perf-stat.overall.instructions-per-iTLB-miss
      0.67            -3.8%       0.64        perf-stat.overall.ipc
 1.358e+08            +9.9%  1.493e+08        perf-stat.ps.branch-misses
  23332701 ±  2%     -29.7%   16391899 ±  2%  perf-stat.ps.context-switches
    162.46 ±  4%     +49.1%     242.15        perf-stat.ps.cpu-migrations
 2.627e+10            -3.7%   2.53e+10        perf-stat.ps.dTLB-loads
 1.527e+10            -3.4%  1.475e+10        perf-stat.ps.dTLB-stores
  20659015           +24.1%   25643823 ±  2%  perf-stat.ps.iTLB-load-misses
 9.977e+10            -2.9%  9.686e+10        perf-stat.ps.instructions
 5.136e+12            -3.2%  4.973e+12        perf-stat.total.instructions
      6.19 ±141%    +614.4%      44.23 ± 67%  perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.x64_sys_call
      3.01 ±222%    +936.1%      31.16 ± 92%  perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      4.75 ± 67%     -89.4%       0.50 ± 88%  perf-sched.sch_delay.avg.ms.futex_do_wait.__futex_wait.futex_wait.do_futex.__x64_sys_futex
      0.01 ± 14%     +25.7%       0.01 ± 19%  perf-sched.sch_delay.avg.ms.irq_wait_for_interrupt.irq_thread.kthread.ret_from_fork.ret_from_fork_asm
     27.62 ±134%     -97.8%       0.60 ±219%  perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork.ret_from_fork_asm
      4.19 ±217%    +514.4%      25.74 ±112%  perf-sched.sch_delay.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone.__x64_sys_vfork
    122.79 ± 13%    +312.2%     506.13 ± 32%  perf-sched.sch_delay.max.ms.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown].[unknown]
      2181 ± 41%     -38.5%       1340 ± 35%  perf-sched.sch_delay.max.ms.anon_pipe_read.fifo_pipe_read.vfs_read.ksys_read.do_syscall_64
      2680 ± 41%     -62.5%       1004 ± 99%  perf-sched.sch_delay.max.ms.anon_pipe_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
    333.74 ±141%    +199.8%       1000        perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.x64_sys_call
    167.85 ±223%    +598.7%       1172 ± 76%  perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
    245.85 ±130%     -97.6%       5.86 ±222%  perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork.ret_from_fork_asm
      0.05 ±  5%     +52.0%       0.08 ±  2%  perf-sched.total_wait_and_delay.average.ms
  20881143 ±  3%     -32.1%   14170213 ±  2%  perf-sched.total_wait_and_delay.count.ms
      0.05 ±  4%     +52.8%       0.08 ±  3%  perf-sched.total_wait_time.average.ms
      0.02 ±  3%     +56.0%       0.04 ±  2%  perf-sched.wait_and_delay.avg.ms.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown].[unknown]
      9.51 ±209%    +745.8%      80.40 ± 54%  perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
     30.35 ± 50%     -71.3%       8.70 ± 28%  perf-sched.wait_and_delay.avg.ms.futex_do_wait.__futex_wait.futex_wait.do_futex.__x64_sys_futex
  20874873 ±  3%     -32.2%   14163492 ±  2%  perf-sched.wait_and_delay.count.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown].[unknown]
    165.00 ± 90%    +378.6%     789.67 ± 30%  perf-sched.wait_and_delay.count.exit_to_user_mode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown].[unknown]
    382.50 ± 28%     +91.8%     733.67 ± 12%  perf-sched.wait_and_delay.count.futex_do_wait.__futex_wait.futex_wait.do_futex.__x64_sys_futex
    503.33 ± 49%    +105.4%       1033 ± 24%  perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.do_epoll_pwait.part
     85.17 ± 24%     +85.3%     157.83 ± 28%  perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread.ret_from_fork
    514.37 ± 18%    +192.2%       1502 ± 24%  perf-sched.wait_and_delay.max.ms.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown].[unknown]
      5701 ± 19%     -32.4%       3852 ± 23%  perf-sched.wait_and_delay.max.ms.anon_pipe_read.fifo_pipe_read.vfs_read.ksys_read.do_syscall_64
      5530 ± 34%     -45.5%       3015 ± 47%  perf-sched.wait_and_delay.max.ms.anon_pipe_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.01 ± 18%    +230.2%       0.05 ± 86%  perf-sched.wait_time.avg.ms.__cond_resched.generic_perform_write.shmem_file_write_iter.vfs_write.ksys_write
      0.02 ±  3%     +55.3%       0.04 ±  2%  perf-sched.wait_time.avg.ms.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown].[unknown]
      0.01 ± 18%     +56.3%       0.02 ± 26%  perf-sched.wait_time.avg.ms.exit_to_user_mode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
     25.61 ± 50%     -68.0%       8.20 ± 26%  perf-sched.wait_time.avg.ms.futex_do_wait.__futex_wait.futex_wait.do_futex.__x64_sys_futex
    489.21 ± 21%    +171.2%       1326 ± 28%  perf-sched.wait_time.max.ms.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown].[unknown]
      1185 ± 26%     -15.6%       1000        perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.x64_sys_call
      0.02 ± 20%    +300.7%       0.10 ± 98%  perf-sched.wait_time.max.ms.exit_to_user_mode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      1281 ± 10%     +37.3%       1759 ± 10%  sched_debug.cfs_rq:/.avg_vruntime.avg
      3255 ± 16%     +39.8%       4549 ±  9%  sched_debug.cfs_rq:/.avg_vruntime.stddev
     10.24 ± 50%    +266.8%      37.54 ± 44%  sched_debug.cfs_rq:/.load_avg.avg
      1086 ± 13%     +41.4%       1536 ±  7%  sched_debug.cfs_rq:/.zero_vruntime.avg
     16641 ± 20%     +71.5%      28532 ± 16%  sched_debug.cfs_rq:/.zero_vruntime.max
      2628 ±  7%     +55.5%       4088 ±  8%  sched_debug.cfs_rq:/.zero_vruntime.stddev
      9.57 ± 13%   +1560.4%     158.98 ± 43%  sched_debug.cfs_rq:/system.slice.load_avg.avg
     54.00 ±  7%   +1167.0%     684.17 ± 40%  sched_debug.cfs_rq:/system.slice.load_avg.max
     10.89 ± 10%   +1043.2%     124.50 ± 39%  sched_debug.cfs_rq:/system.slice.load_avg.stddev
      3621 ± 24%     -82.5%     633.83 ± 25%  sched_debug.cfs_rq:/system.slice.se->load.weight.min
      1382 ± 10%     +38.3%       1912 ± 11%  sched_debug.cfs_rq:/system.slice.se->vruntime.avg
      3453 ± 15%     +39.3%       4812 ± 10%  sched_debug.cfs_rq:/system.slice.se->vruntime.stddev
      1411 ± 54%    +818.0%      12960 ± 38%  sched_debug.cfs_rq:/system.slice.tg_load_avg.avg
      1983 ± 44%    +699.0%      15847 ± 40%  sched_debug.cfs_rq:/system.slice.tg_load_avg.max
      1244 ± 59%    +823.7%      11495 ± 38%  sched_debug.cfs_rq:/system.slice.tg_load_avg.min
     18.22 ± 45%    +791.4%     162.37 ± 41%  sched_debug.cfs_rq:/system.slice.tg_load_avg_contrib.avg
      0.39 ± 40%    +163.4%       1.02 ± 17%  sched_debug.cfs_rq:/system.slice/containerd.service.load_avg.avg
      1.83 ± 37%    +118.2%       4.00 ± 25%  sched_debug.cfs_rq:/system.slice/containerd.service.load_avg.max
      0.70 ± 33%    +111.6%       1.48 ± 17%  sched_debug.cfs_rq:/system.slice/containerd.service.load_avg.stddev
      0.44 ± 28%    +146.7%       1.09 ± 16%  sched_debug.cfs_rq:/system.slice/containerd.service.runnable_avg.avg
      2.00 ± 28%    +108.3%       4.17 ± 25%  sched_debug.cfs_rq:/system.slice/containerd.service.runnable_avg.max
      0.77 ± 23%    +105.5%       1.57 ± 19%  sched_debug.cfs_rq:/system.slice/containerd.service.runnable_avg.stddev
      0.29 ± 53%    +224.9%       0.94 ± 12%  sched_debug.cfs_rq:/system.slice/containerd.service.se->avg.load_avg.avg
      0.63 ± 54%    +119.4%       1.38 ± 28%  sched_debug.cfs_rq:/system.slice/containerd.service.se->avg.load_avg.stddev
      0.44 ± 28%    +135.1%       1.04 ± 18%  sched_debug.cfs_rq:/system.slice/containerd.service.se->avg.runnable_avg.avg
      2.00 ± 28%    +100.0%       4.00 ± 25%  sched_debug.cfs_rq:/system.slice/containerd.service.se->avg.runnable_avg.max
      0.77 ± 23%     +95.4%       1.50 ± 18%  sched_debug.cfs_rq:/system.slice/containerd.service.se->avg.runnable_avg.stddev
      0.44 ± 28%    +135.1%       1.04 ± 18%  sched_debug.cfs_rq:/system.slice/containerd.service.se->avg.util_avg.avg
      2.00 ± 28%    +100.0%       4.00 ± 25%  sched_debug.cfs_rq:/system.slice/containerd.service.se->avg.util_avg.max
      0.77 ± 23%     +95.4%       1.50 ± 18%  sched_debug.cfs_rq:/system.slice/containerd.service.se->avg.util_avg.stddev
      2.50 ± 47%    +133.5%       5.83 ± 28%  sched_debug.cfs_rq:/system.slice/containerd.service.tg_load_avg.avg
      3.00 ± 50%    +105.6%       6.17 ± 28%  sched_debug.cfs_rq:/system.slice/containerd.service.tg_load_avg.max
      2.17 ± 49%    +161.5%       5.67 ± 29%  sched_debug.cfs_rq:/system.slice/containerd.service.tg_load_avg.min
      0.39 ± 40%    +163.4%       1.02 ± 17%  sched_debug.cfs_rq:/system.slice/containerd.service.tg_load_avg_contrib.avg
      1.83 ± 37%    +118.2%       4.00 ± 25%  sched_debug.cfs_rq:/system.slice/containerd.service.tg_load_avg_contrib.max
      0.70 ± 33%    +111.6%       1.48 ± 17%  sched_debug.cfs_rq:/system.slice/containerd.service.tg_load_avg_contrib.stddev
      0.44 ± 28%    +146.7%       1.09 ± 16%  sched_debug.cfs_rq:/system.slice/containerd.service.util_avg.avg
      2.00 ± 28%    +108.3%       4.17 ± 25%  sched_debug.cfs_rq:/system.slice/containerd.service.util_avg.max
      0.77 ± 23%    +105.5%       1.57 ± 19%  sched_debug.cfs_rq:/system.slice/containerd.service.util_avg.stddev
      7.74 ±  3%     -39.7%       4.67 ± 35%  sched_debug.cpu.clock.stddev
      1495 ±  4%      +8.4%       1620 ±  5%  sched_debug.cpu.nr_switches.avg





Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

