Message-ID: <202212130906.15eab7ed-yujie.liu@intel.com>
Date: Tue, 13 Dec 2022 10:00:45 +0800
From: kernel test robot <yujie.liu@...el.com>
To: Brian Foster <bfoster@...hat.com>
CC: <oe-lkp@...ts.linux.dev>, <lkp@...el.com>,
<linux-kernel@...r.kernel.org>, <ying.huang@...el.com>,
<feng.tang@...el.com>, <zhengjun.xing@...ux.intel.com>,
<fengwei.yin@...el.com>, <linux-mm@...ck.org>,
<linux-fsdevel@...r.kernel.org>, <ikent@...hat.com>,
<onestero@...hat.com>, <willy@...radead.org>, <ebiederm@...hat.com>
Subject: Re: [PATCH v3 4/5] pid: mark pids associated with group leader tasks
Greetings,
FYI, we noticed a -4.7% regression of stress-ng.vfork.ops_per_sec due to commit:
commit: 88294e6f6d1e1a9169cc9b715050bd8b52ac5f44 ("[PATCH v3 4/5] pid: mark pids associated with group leader tasks")
url: https://github.com/intel-lab-lkp/linux/commits/Brian-Foster/proc-improve-root-readdir-latency-with-many-threads/20221203-012018
base: https://git.kernel.org/cgit/linux/kernel/git/powerpc/linux.git next
patch link: https://lore.kernel.org/all/20221202171620.509140-5-bfoster@redhat.com/
patch subject: [PATCH v3 4/5] pid: mark pids associated with group leader tasks
in testcase: stress-ng
on test machine: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory
with the following parameters:
nr_threads: 100%
testtime: 60s
sc_pid_max: 4194304
class: scheduler
test: vfork
cpufreq_governor: performance
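For reference, the job parameters above correspond roughly to the manual setup below; the exact command line generated by lkp may differ, so treat the flag choices (in particular mapping nr_threads: 100% on a 128-CPU box to `--vfork 128`) as assumptions rather than the contents of the attached job file:

```shell
# Approximate manual reproduction of the job parameters; NOT the exact
# lkp-generated invocation.
sudo sysctl -w kernel.pid_max=4194304         # sc_pid_max: 4194304
sudo cpupower frequency-set -g performance    # cpufreq_governor: performance
stress-ng --vfork 128 --timeout 60s --metrics-brief   # test: vfork, testtime: 60s
```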
Details are as below:
=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/sc_pid_max/tbox_group/test/testcase/testtime:
scheduler/gcc-11/performance/x86_64-rhel-8.3/100%/debian-11.1-x86_64-20220510.cgz/4194304/lkp-icl-2sp5/vfork/stress-ng/60s
commit:
eae2900480 ("pid: switch pid_namespace from idr to xarray")
88294e6f6d ("pid: mark pids associated with group leader tasks")
eae2900480d61b93 88294e6f6d1e1a9169cc9b71505
---------------- ---------------------------
                 %stddev     %change         %stddev
                     \          |                \
26176589 ± 3% -5.4% 24757728 stress-ng.time.voluntary_context_switches
11320148 -4.7% 10789611 stress-ng.vfork.ops
188669 -4.7% 179826 stress-ng.vfork.ops_per_sec
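The headline -4.7% can be sanity-checked from the two ops_per_sec values reported above (a quick awk one-liner, not part of the lkp tooling):

```shell
# Recompute the ops_per_sec delta from the values in the table above.
base=188669; patched=179826
awk -v b="$base" -v p="$patched" \
    'BEGIN { printf "%.1f%%\n", (p - b) / b * 100 }'   # prints -4.7%
```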
48483 ± 13% +17.3% 56864 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
721230 ± 4% -5.7% 679849 vmstat.system.cs
1192641 ± 3% -21.4% 937830 sched_debug.cpu.curr->pid.max
469229 ± 9% -18.1% 384233 ± 5% sched_debug.cpu.curr->pid.stddev
768469 ± 4% -6.0% 722315 perf-stat.i.context-switches
0.09 ± 5% -0.0 0.07 ± 3% perf-stat.i.dTLB-load-miss-rate%
10758930 ± 5% -11.1% 9564450 ± 3% perf-stat.i.dTLB-load-misses
2480878 ± 2% -4.1% 2380112 ± 2% perf-stat.i.dTLB-store-misses
0.15 +2.7% 0.15 perf-stat.i.ipc
23566766 -3.5% 22751863 perf-stat.i.node-load-misses
10701300 -4.6% 10206578 perf-stat.i.node-store-misses
0.09 ± 4% -0.0 0.08 ± 4% perf-stat.overall.dTLB-load-miss-rate%
743961 ± 4% -6.0% 699259 perf-stat.ps.context-switches
10436726 ± 5% -11.3% 9261813 ± 4% perf-stat.ps.dTLB-load-misses
22838094 -3.4% 22057626 perf-stat.ps.node-load-misses
10394494 -4.3% 9950784 perf-stat.ps.node-store-misses
If you fix the issue, kindly add the following tags:
| Reported-by: kernel test robot <yujie.liu@...el.com>
| Link: https://lore.kernel.org/oe-lkp/202212130906.15eab7ed-yujie.liu@intel.com
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# If you come across any failure that blocks the test,
# please remove ~/.lkp and the /lkp dir to run from a clean state.
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://01.org/lkp
View attachment "config-6.1.0-rc2-00155-g88294e6f6d1e" of type "text/plain" (166032 bytes)
View attachment "job-script" of type "text/plain" (8272 bytes)
View attachment "job.yaml" of type "text/plain" (5581 bytes)
View attachment "reproduce" of type "text/plain" (383 bytes)