Message-ID: <20220506092250.GI23061@xsang-OptiPlex-9020>
Date:   Fri, 6 May 2022 17:22:50 +0800
From:   kernel test robot <oliver.sang@...el.com>
To:     Dave Chinner <david@...morbit.com>
Cc:     0day robot <lkp@...el.com>, LKML <linux-kernel@...r.kernel.org>,
        lkp@...ts.01.org, ying.huang@...el.com, feng.tang@...el.com,
        zhengjun.xing@...ux.intel.com, fengwei.yin@...el.com,
        linux-xfs@...r.kernel.org
Subject: [xfs]  32678f1513:  aim7.jobs-per-min -5.6% regression



Greetings,

FYI, we noticed a -5.6% regression of aim7.jobs-per-min due to commit:


commit: 32678f151338b9a321e9e27139a63c81f353acb7 ("[PATCH 1/4] xfs: detect self referencing btree sibling pointers")
url: https://github.com/intel-lab-lkp/linux/commits/Dave-Chinner/xfs-fix-random-format-verification-issues/20220502-162206
base: https://git.kernel.org/cgit/fs/xfs/xfs-linux.git for-next
patch link: https://lore.kernel.org/linux-xfs/20220502082018.1076561-2-david@fromorbit.com
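
For context, the patch hardens the verifier for short-form (per-AG) btree
blocks so that a block whose left or right sibling pointer refers back to
the block itself is rejected as corrupt; such a self-reference would
otherwise send sibling walks into an endless loop. The profile data below
attributes the added cycles to __xfs_btree_check_sblock, where checks of
this kind run. The following is a minimal, self-contained sketch of the
idea only; the structure layout, field types, and names are simplified
stand-ins, not the kernel's actual on-disk (big-endian) structures:

    /* Sketch only: a short-form btree block must never name its own AG
     * block number as a sibling, or sibling walks loop forever. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NULLAGBLOCK ((uint32_t)-1)      /* "no sibling" sentinel */

    struct sblock {                 /* stand-in for the btree block header */
            uint32_t bb_leftsib;    /* AG block number of left sibling */
            uint32_t bb_rightsib;   /* AG block number of right sibling */
    };

    /* Return false if either sibling pointer refers back to the block. */
    static bool sblock_siblings_ok(const struct sblock *b, uint32_t agbno)
    {
            if (b->bb_leftsib != NULLAGBLOCK && b->bb_leftsib == agbno)
                    return false;
            if (b->bb_rightsib != NULLAGBLOCK && b->bb_rightsib == agbno)
                    return false;
            return true;
    }

    int main(void)
    {
            struct sblock good = { .bb_leftsib = NULLAGBLOCK, .bb_rightsib = 7 };
            struct sblock bad  = { .bb_leftsib = 5,           .bb_rightsib = 5 };

            printf("block 5, sane siblings:  %s\n",
                   sblock_siblings_ok(&good, 5) ? "ok" : "corrupt");
            printf("block 5, self-reference: %s\n",
                   sblock_siblings_ok(&bad, 5) ? "ok" : "corrupt");
            return 0;
    }

Because a check of this kind runs for every btree block walked, it adds a
small constant per-operation cost, which is consistent with the profile
deltas reported below.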

in testcase: aim7
on test machine: 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz with 128G memory
with the following parameters:

	disk: 4BRD_12G
	md: RAID0
	fs: xfs
	test: disk_wrt
	load: 3000
	cpufreq_governor: performance
	ucode: 0x500320a

test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/



If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@...el.com>


Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        sudo bin/lkp install job.yaml           # job file is attached in this email
        bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
        sudo bin/lkp run generated-yaml-file

        # if you come across any failure that blocks the test,
        # please remove the ~/.lkp and /lkp directories to run from a clean state.

=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase/ucode:
  gcc-11/performance/4BRD_12G/xfs/x86_64-rhel-8.3/3000/RAID0/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp9/disk_wrt/aim7/0x500320a

commit: 
  a44a027a8b ("Merge tag 'large-extent-counters-v9' of https://github.com/chandanr/linux into xfs-5.19-for-next")
  32678f1513 ("xfs: detect self referencing btree sibling pointers")

a44a027a8b2a20fe 32678f151338b9a321e9e27139a 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
    464232            -5.6%     438315        aim7.jobs-per-min
    702.64            +2.9%     723.04        aim7.time.system_time
     20.61            -3.7%      19.84        iostat.cpu.system
      2257 ± 14%     +35.0%       3047 ± 21%  sched_debug.cpu.avg_idle.min
     45360            -5.7%      42793        vmstat.system.cs
      5130 ±  5%     -13.7%       4427 ±  5%  meminfo.Dirty
      5972 ±  5%     -10.4%       5351 ±  4%  meminfo.Inactive(file)
      2778 ± 53%     -44.2%       1549 ± 77%  numa-meminfo.node0.Active(anon)
      6262 ±156%     -76.5%       1473 ± 15%  numa-meminfo.node1.Mapped
    694.17 ± 53%     -44.3%     386.83 ± 77%  numa-vmstat.node0.nr_active_anon
    694.17 ± 53%     -44.3%     386.83 ± 77%  numa-vmstat.node0.nr_zone_active_anon
      1676 ±151%     -75.8%     405.33 ± 10%  numa-vmstat.node1.nr_mapped
 8.246e+09            -3.9%  7.924e+09        perf-stat.i.branch-instructions
  23053543 ±  2%      -6.0%   21668177        perf-stat.i.cache-misses
     46885            -4.6%      44714        perf-stat.i.context-switches
 5.425e+10            -2.6%  5.285e+10        perf-stat.i.cpu-cycles
      1424            -8.9%       1296        perf-stat.i.cpu-migrations
 1.175e+10            -4.1%  1.127e+10        perf-stat.i.dTLB-loads
 5.701e+09            -5.3%  5.396e+09        perf-stat.i.dTLB-stores
  17300716           -12.4%   15151342        perf-stat.i.iTLB-load-misses
 4.151e+10            -3.8%  3.994e+10        perf-stat.i.instructions
      2341 ±  5%     +10.1%       2578        perf-stat.i.instructions-per-iTLB-miss
      0.62            -2.6%       0.60        perf-stat.i.metric.GHz
    292.90            -4.3%     280.20        perf-stat.i.metric.M/sec
   1997742 ±  2%      -6.2%    1874006        perf-stat.i.node-loads
   3534663            -5.9%    3326105        perf-stat.i.node-stores
      2354 ±  2%      +3.6%       2440 ±  2%  perf-stat.overall.cycles-between-cache-misses
     80.54            -1.7       78.83        perf-stat.overall.iTLB-load-miss-rate%
      2400            +9.8%       2636        perf-stat.overall.instructions-per-iTLB-miss
  8.05e+09            -3.7%  7.749e+09        perf-stat.ps.branch-instructions
  22503328 ±  2%      -5.9%   21186670        perf-stat.ps.cache-misses
     45772            -4.5%      43725        perf-stat.ps.context-switches
 5.296e+10            -2.4%  5.168e+10        perf-stat.ps.cpu-cycles
      1390            -8.8%       1267        perf-stat.ps.cpu-migrations
 1.147e+10            -4.0%  1.102e+10        perf-stat.ps.dTLB-loads
 5.565e+09            -5.2%  5.277e+09        perf-stat.ps.dTLB-stores
  16890063           -12.3%   14816058        perf-stat.ps.iTLB-load-misses
 4.053e+10            -3.6%  3.905e+10        perf-stat.ps.instructions
   1950358 ±  2%      -6.0%    1832583        perf-stat.ps.node-loads
   3450878            -5.7%    3252498        perf-stat.ps.node-stores
 1.625e+12            +2.8%   1.67e+12        perf-stat.total.instructions
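
For reference, the instructions-per-iTLB-miss figures above follow
directly from the per-second counters:

        4.053e+10 instructions / 16890063 iTLB-load-misses ~= 2400  (a44a027a8b)
        3.905e+10 instructions / 14816058 iTLB-load-misses ~= 2636  (32678f1513)

i.e. iTLB load misses dropped faster (-12.3%) than instructions (-3.6%),
so each miss now covers more instructions.
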
      0.67 ± 12%      -0.1        0.57 ±  4%  perf-profile.calltrace.cycles-pp.file_update_time.xfs_file_write_checks.xfs_file_buffered_write.new_sync_write.vfs_write
      1.33            -0.1        1.26 ±  3%  perf-profile.calltrace.cycles-pp.__pagevec_release.truncate_inode_pages_range.evict.__dentry_kill.dentry_kill
      1.29            -0.1        1.22 ±  3%  perf-profile.calltrace.cycles-pp.release_pages.__pagevec_release.truncate_inode_pages_range.evict.__dentry_kill
      1.03 ±  2%      -0.1        0.98 ±  4%  perf-profile.calltrace.cycles-pp.__folio_mark_dirty.filemap_dirty_folio.iomap_write_end.iomap_write_iter.iomap_file_buffered_write
      0.53 ±  3%      +0.1        0.64 ±  4%  perf-profile.calltrace.cycles-pp.xfs_ifree.xfs_inactive_ifree.xfs_inactive.xfs_inodegc_worker.process_one_work
      0.88 ±  3%      +0.1        1.00 ±  3%  perf-profile.calltrace.cycles-pp.xfs_inactive_ifree.xfs_inactive.xfs_inodegc_worker.process_one_work.worker_thread
      0.95 ±  3%      +0.1        1.08 ±  3%  perf-profile.calltrace.cycles-pp.xfs_inactive.xfs_inodegc_worker.process_one_work.worker_thread.kthread
      0.96 ±  3%      +0.1        1.09 ±  3%  perf-profile.calltrace.cycles-pp.xfs_inodegc_worker.process_one_work.worker_thread.kthread.ret_from_fork
      0.98 ±  3%      +0.1        1.11 ±  3%  perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
      0.99 ±  3%      +0.1        1.12 ±  3%  perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
      0.99 ±  3%      +0.1        1.13 ±  3%  perf-profile.calltrace.cycles-pp.ret_from_fork
      0.99 ±  3%      +0.1        1.13 ±  3%  perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
      1.11 ±  3%      +0.2        1.26 ±  6%  perf-profile.calltrace.cycles-pp.lookup_open.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
      0.95 ±  4%      +0.2        1.10 ±  6%  perf-profile.calltrace.cycles-pp.xfs_create.xfs_generic_create.lookup_open.open_last_lookups.path_openat
      0.96 ±  4%      +0.2        1.12 ±  6%  perf-profile.calltrace.cycles-pp.xfs_generic_create.lookup_open.open_last_lookups.path_openat.do_filp_open
      0.08 ±223%      +0.5        0.61 ±  8%  perf-profile.calltrace.cycles-pp.xfs_dialloc.xfs_create.xfs_generic_create.lookup_open.open_last_lookups
      0.00            +0.5        0.55 ±  4%  perf-profile.calltrace.cycles-pp.xfs_difree.xfs_ifree.xfs_inactive_ifree.xfs_inactive.xfs_inodegc_worker
      0.39 ±  9%      -0.1        0.31 ±  5%  perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
      1.33            -0.1        1.26 ±  3%  perf-profile.children.cycles-pp.__pagevec_release
      1.38            -0.1        1.32 ±  3%  perf-profile.children.cycles-pp.release_pages
      0.59 ±  3%      -0.1        0.53 ±  3%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
      1.00 ±  2%      -0.1        0.95 ±  3%  perf-profile.children.cycles-pp.folio_alloc
      0.36 ±  5%      -0.0        0.31 ±  4%  perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
      0.38 ±  4%      -0.0        0.36 ±  3%  perf-profile.children.cycles-pp.__might_sleep
      0.02 ±141%      +0.0        0.06 ±  9%  perf-profile.children.cycles-pp.xfs_itruncate_extents_flags
      0.28 ±  5%      +0.1        0.37 ±  4%  perf-profile.children.cycles-pp.xfs_difree_inobt
      0.19 ±  3%      +0.1        0.29 ±  7%  perf-profile.children.cycles-pp.xfs_btree_get_rec
      0.44 ±  4%      +0.1        0.55 ±  4%  perf-profile.children.cycles-pp.xfs_difree
      0.20 ±  9%      +0.1        0.30 ±  7%  perf-profile.children.cycles-pp.xfs_btree_increment
      0.53 ±  3%      +0.1        0.64 ±  4%  perf-profile.children.cycles-pp.xfs_ifree
      0.88 ±  3%      +0.1        1.00 ±  3%  perf-profile.children.cycles-pp.xfs_inactive_ifree
      0.38 ±  4%      +0.1        0.50 ±  6%  perf-profile.children.cycles-pp.xfs_inobt_get_rec
      0.95 ±  3%      +0.1        1.08 ±  3%  perf-profile.children.cycles-pp.xfs_inactive
      0.96 ±  3%      +0.1        1.09 ±  3%  perf-profile.children.cycles-pp.xfs_inodegc_worker
      0.42 ±  3%      +0.1        0.55 ±  7%  perf-profile.children.cycles-pp.xfs_dialloc_ag
      0.98 ±  3%      +0.1        1.11 ±  3%  perf-profile.children.cycles-pp.process_one_work
      0.48 ±  3%      +0.1        0.61 ±  8%  perf-profile.children.cycles-pp.xfs_dialloc
      0.99 ±  3%      +0.1        1.12 ±  3%  perf-profile.children.cycles-pp.worker_thread
      0.99 ±  3%      +0.1        1.13 ±  3%  perf-profile.children.cycles-pp.kthread
      0.99 ±  3%      +0.1        1.14 ±  3%  perf-profile.children.cycles-pp.ret_from_fork
      1.11 ±  3%      +0.2        1.26 ±  6%  perf-profile.children.cycles-pp.lookup_open
      0.95 ±  4%      +0.2        1.10 ±  6%  perf-profile.children.cycles-pp.xfs_create
      0.96 ±  4%      +0.2        1.12 ±  6%  perf-profile.children.cycles-pp.xfs_generic_create
      0.13 ±  5%      +0.2        0.33 ±  6%  perf-profile.children.cycles-pp.__xfs_btree_check_sblock
      0.22 ±  6%      +0.2        0.43 ±  6%  perf-profile.children.cycles-pp.xfs_btree_check_sblock
      0.68 ±  4%      +0.2        0.91 ±  5%  perf-profile.children.cycles-pp.xfs_check_agi_freecount
      0.39 ±  9%      -0.1        0.31 ±  4%  perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
      0.43 ±  4%      -0.0        0.40 ±  3%  perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
      0.21 ±  4%      -0.0        0.19 ±  5%  perf-profile.self.cycles-pp.file_update_time
      0.11 ±  4%      +0.2        0.30 ±  5%  perf-profile.self.cycles-pp.__xfs_btree_check_sblock




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


-- 
0-DAY CI Kernel Test Service
https://01.org/lkp



Attachments:
  config-5.18.0-rc2-00066-g32678f151338 (text/plain, 162679 bytes)
  job-script (text/plain, 8379 bytes)
  job.yaml (text/plain, 5677 bytes)
  reproduce (text/plain, 1028 bytes)
