Message-ID: <202302122123.97c4a3d2-oliver.sang@intel.com>
Date:   Sun, 12 Feb 2023 21:56:03 +0800
From:   kernel test robot <oliver.sang@...el.com>
To:     Nathan Huckleberry <nhuck@...gle.com>
CC:     <oe-lkp@...ts.linux.dev>, <lkp@...el.com>,
        Sandeep Dhavale <dhavale@...gle.com>,
        Daeho Jeong <daehojeong@...gle.com>,
        Eric Biggers <ebiggers@...nel.org>,
        Sami Tolvanen <samitolvanen@...gle.com>,
        <linux-doc@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <ying.huang@...el.com>, <feng.tang@...el.com>,
        <zhengjun.xing@...ux.intel.com>, <fengwei.yin@...el.com>,
        Nathan Huckleberry <nhuck@...gle.com>,
        Tejun Heo <tj@...nel.org>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        Jonathan Corbet <corbet@....net>
Subject: Re: [PATCH] workqueue: Add WQ_SCHED_FIFO


Greetings,

FYI, we noticed a -8.2% regression of stress-ng.kill.ops_per_sec due to commit:


commit: 62640b40b22c774c8b73b9d9a8f285fea233f78c ("[PATCH] workqueue: Add WQ_SCHED_FIFO")
url: https://github.com/intel-lab-lkp/linux/commits/Nathan-Huckleberry/workqueue-Add-WQ_SCHED_FIFO/20230114-050854
base: https://git.kernel.org/cgit/linux/kernel/git/tj/wq.git for-next
patch link: https://lore.kernel.org/all/20230113210703.62107-1-nhuck@google.com/
patch subject: [PATCH] workqueue: Add WQ_SCHED_FIFO
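
For context, the patch under test adds a new workqueue flag that makes a workqueue's workers run under the SCHED_FIFO real-time policy. The sketch below is a hypothetical usage example only: it assumes WQ_SCHED_FIFO composes with alloc_workqueue() like the existing WQ_* flags, and the queue name and `decomp_wq` variable are made up for illustration (the actual flag semantics are defined by the patch itself).

```c
#include <linux/workqueue.h>

/* Hypothetical example: a driver that wants low-latency work items
 * (e.g. decompression) could request SCHED_FIFO workers by passing the
 * proposed WQ_SCHED_FIFO flag alongside the usual workqueue flags. */
static struct workqueue_struct *decomp_wq;

static int __init example_init(void)
{
	/* Workers of this queue would run with SCHED_FIFO priority
	 * instead of the default SCHED_OTHER. */
	decomp_wq = alloc_workqueue("decomp", WQ_UNBOUND | WQ_SCHED_FIFO, 0);
	if (!decomp_wq)
		return -ENOMEM;
	return 0;
}
```

The regression below suggests a system-wide cost to such a change: SCHED_FIFO workers preempt normal tasks, which can slow unrelated workloads like the stress-ng kill stressor measured here.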

in testcase: stress-ng
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (Cascade Lake) with 512G memory
with following parameters:

	nr_threads: 10%
	disk: 1HDD
	testtime: 60s
	fs: ext4
	class: os
	test: kill
	cpufreq_governor: performance




If you fix the issue, kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@...el.com>
| Link: https://lore.kernel.org/oe-lkp/202302122123.97c4a3d2-oliver.sang@intel.com


Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        sudo bin/lkp install job.yaml           # job file is attached in this email
        bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
        sudo bin/lkp run generated-yaml-file

        # if you come across any failure that blocks the test,
        # please remove the ~/.lkp and /lkp directories to run from a clean state.

=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
  os/gcc-11/performance/1HDD/ext4/x86_64-rhel-8.3/10%/debian-11.1-x86_64-20220510.cgz/lkp-csl-2sp7/kill/stress-ng/60s

commit: 
  c63a2e52d5 ("workqueue: Fold rebind_worker() within rebind_workers()")
  62640b40b2 ("workqueue: Add WQ_SCHED_FIFO")

c63a2e52d5e08f01 62640b40b22c774c8b73b9d9a8f 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
      1579            -8.9%       1439        stress-ng.kill.kill_calls_per_sec
    284747            -8.2%     261516        stress-ng.kill.ops
      4745            -8.2%       4358        stress-ng.kill.ops_per_sec
    622870            -9.5%     563827        stress-ng.time.voluntary_context_switches
      0.69 ±  3%      -0.0        0.65 ±  3%  mpstat.cpu.all.irq%
     37785 ±  6%     -16.2%      31679 ± 14%  turbostat.C1
     17763            +8.7%      19302        proc-vmstat.nr_kernel_stack
    142968 ± 15%     -21.7%     111934 ±  9%  proc-vmstat.pgpgout
      2170 ± 16%     -21.1%       1712 ±  9%  vmstat.io.bo
     21321            -8.2%      19583        vmstat.system.cs
      4794 ±  7%     -27.2%       3491 ± 10%  sched_debug.cfs_rq:/.load_avg.max
    541.58 ±  8%     -17.5%     446.78 ± 10%  sched_debug.cfs_rq:/.load_avg.stddev
    514.00            +9.2%     561.20 ±  5%  sched_debug.cfs_rq:/.removed.runnable_avg.max
     82.50 ± 28%     +34.5%     110.96 ± 10%  sched_debug.cfs_rq:/.removed.runnable_avg.stddev
    514.00            +9.2%     561.20 ±  5%  sched_debug.cfs_rq:/.removed.util_avg.max
     82.50 ± 28%     +34.5%     110.96 ± 10%  sched_debug.cfs_rq:/.removed.util_avg.stddev
      0.93 ±  3%      -0.0        0.89 ±  2%  perf-stat.i.branch-miss-rate%
     21969            -8.7%      20060        perf-stat.i.context-switches
      0.03 ± 13%      -0.0        0.03 ±  6%  perf-stat.i.dTLB-load-miss-rate%
    296329 ± 12%     -17.7%     244025 ±  8%  perf-stat.i.dTLB-load-misses
      0.03 ± 13%      -0.0        0.03 ±  7%  perf-stat.overall.dTLB-load-miss-rate%
     21632            -8.7%      19751        perf-stat.ps.context-switches
    291040 ± 12%     -17.7%     239628 ±  8%  perf-stat.ps.dTLB-load-misses
     39.79 ± 13%      -6.6       33.16 ± 16%  perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
     39.67 ± 13%      -6.6       33.08 ± 16%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     39.67 ± 13%      -6.6       33.08 ± 16%  perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
     39.66 ± 13%      -6.6       33.08 ± 16%  perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     39.46 ± 13%      -6.6       32.89 ± 16%  perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     38.55 ± 13%      -6.5       32.08 ± 17%  perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary
     38.35 ± 13%      -6.4       31.91 ± 17%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
     39.79 ± 13%      -6.6       33.16 ± 16%  perf-profile.children.cycles-pp.secondary_startup_64_no_verify
     39.79 ± 13%      -6.6       33.16 ± 16%  perf-profile.children.cycles-pp.cpu_startup_entry
     39.79 ± 13%      -6.6       33.16 ± 16%  perf-profile.children.cycles-pp.do_idle
     39.58 ± 13%      -6.6       32.97 ± 16%  perf-profile.children.cycles-pp.cpuidle_idle_call
     39.67 ± 13%      -6.6       33.08 ± 16%  perf-profile.children.cycles-pp.start_secondary
     38.66 ± 13%      -6.5       32.15 ± 17%  perf-profile.children.cycles-pp.cpuidle_enter_state
     38.66 ± 13%      -6.5       32.15 ± 17%  perf-profile.children.cycles-pp.cpuidle_enter
      0.86 ± 37%      -0.3        0.54 ± 20%  perf-profile.children.cycles-pp.ktime_get
      0.68 ± 34%      -0.3        0.41 ± 15%  perf-profile.children.cycles-pp.clockevents_program_event
      0.12 ± 13%      -0.0        0.09 ± 15%  perf-profile.children.cycles-pp.exit_to_user_mode_prepare
      0.08 ± 12%      -0.0        0.05 ± 53%  perf-profile.children.cycles-pp.arch_do_signal_or_restart
      0.12 ± 11%      -0.0        0.09 ± 11%  perf-profile.children.cycles-pp.exit_to_user_mode_loop
      0.06            -0.0        0.03 ± 81%  perf-profile.children.cycles-pp.rcu_sched_clock_irq
      0.10 ± 11%      -0.0        0.07 ± 10%  perf-profile.children.cycles-pp.__schedule
      0.78 ± 39%      -0.3        0.48 ± 22%  perf-profile.self.cycles-pp.ktime_get




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests



View attachment "config-6.2.0-rc2-00033-g62640b40b22c" of type "text/plain" (166849 bytes)

View attachment "job-script" of type "text/plain" (8493 bytes)

View attachment "job.yaml" of type "text/plain" (5795 bytes)

View attachment "reproduce" of type "text/plain" (532 bytes)
