Message-ID: <Zr61Ndq8FFJ0/S8K@xsang-OptiPlex-9020>
Date: Fri, 16 Aug 2024 10:11:01 +0800
From: Oliver Sang <oliver.sang@...el.com>
To: Christoph Hellwig <hch@....de>
CC: <oe-lkp@...ts.linux.dev>, <lkp@...el.com>, <linux-kernel@...r.kernel.org>,
Anna Schumaker <Anna.Schumaker@...app.com>, Sagi Grimberg <sagi@...mberg.me>,
<linux-nfs@...r.kernel.org>, <ying.huang@...el.com>, <feng.tang@...el.com>,
<fengwei.yin@...el.com>, <oliver.sang@...el.com>
Subject: Re: [linus:master] [nfs] 49b29a573d: filebench.sum_operations/s
-85.6% regression
hi, Christoph Hellwig,
On Wed, Aug 14, 2024 at 03:04:15PM +0200, Christoph Hellwig wrote:
> > sorry I don't have many details. not sure if https://github.com/filebench/filebench/wiki
> > is helpful for you?
>
> Not too much. Especially as I'm not sure what you are actually
> running. If I run the workloads/randomrw.f from the filebench git
> repository, it fails to run due to a lack of a run statement, and it
> also hardcodes /tmp. Can you share the actual randomrw.f used for the
> test?
please refer to
https://download.01.org/0day-ci/archive/20240808/202408081514.106c770e-oliver.sang@intel.com/repro-script
for our bot setup [1]
for the 'run statement' issue, you need to append a line such as

run 60

at the end of the workload file workloads/randomrw.f
(some workload files under https://github.com/filebench/filebench/blob/master/workloads/
already have this, some don't)
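for example, assuming a checkout of the filebench repository in the current
directory, appending it could be done with something like:

    # append a run statement so the workload actually starts (60 seconds runtime)
    echo 'run 60' >> workloads/randomrw.f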
[1]
# clean up any leftover device-mapper state and reformat the test partition as ext4
dmsetup remove_all
wipefs -a --force /dev/sda1
mkfs -t ext4 -q -E lazy_itable_init=0,lazy_journal_init=0 -F /dev/sda1
mkdir -p /fs/sda1
mount -t ext4 /dev/sda1 /fs/sda1

# build the NFS export tree: tmpfs root with the ext4 fs bind-mounted under it
mkdir /export
mount -t tmpfs nfsv4_root_export /export
mkdir -p /export//fs/sda1
mount --bind /fs/sda1 /export//fs/sda1
echo '/export//fs/sda1 *(rw,no_subtree_check,no_root_squash)' >> /etc/exports

# (re)start the NFS server stack and mount the export back over NFSv4 on localhost
systemctl restart rpcbind
systemctl restart rpc-statd
systemctl restart nfs-idmapd
systemctl restart nfs-server
mkdir -p /nfs/sda1
timeout 5m mount -t nfs -o vers=4 localhost:/fs/sda1 /nfs/sda1
touch /nfs/sda1/wait_for_nfs_grace_period

# drop the page cache and set all online CPUs to the performance governor
sync
echo 3 > /proc/sys/vm/drop_caches
for cpu_dir in /sys/devices/system/cpu/cpu[0-9]*
do
    online_file="$cpu_dir"/online
    [ -f "$online_file" ] && [ "$(cat "$online_file")" -eq 0 ] && continue
    file="$cpu_dir"/cpufreq/scaling_governor
    [ -f "$file" ] && echo "performance" > "$file"
done

# run the benchmark against the NFS mount, then clean up the test files
filebench -f /lkp/benchmarks/filebench/share/filebench/workloads/randomrw.f
sleep 100
rm -rf /nfs/sda1/largefile1 /nfs/sda1/lost+found /nfs/sda1/wait_for_nfs_grace_period
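in case it helps, a quick sanity check of the loopback export (not part of the
bot script, and assuming nfs-utils and util-linux are installed) could be
something like:

    showmount -e localhost      # list what the local server exports
    findmnt -t nfs4 /nfs/sda1   # confirm the NFSv4 mount filebench writes to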
>
> Also do you run this test on other local file systems exported by
> NFS, e.g. XFS and do you have numbers for that?
>
we also tested with xfs as the exported filesystem; there seems to be no big
difference between 9aac777aaf and 49b29a573d, while 39c910a430 seems to show a drop.
=========================================================================================
compiler/cpufreq_governor/disk/fs2/fs/kconfig/rootfs/tbox_group/test/testcase:
gcc-12/performance/1HDD/nfsv4/xfs/x86_64-rhel-8.3/debian-12-x86_64-20240206.cgz/lkp-icl-2sp6/randomrw.f/filebench
<---- the only changed part is 'xfs'
commit:
9aac777aaf ("filemap: Convert generic_perform_write() to support large folios")
49b29a573d ("nfs: add support for large folios")
39c910a430 ("nfs: do not extend writes to the entire folio")
9aac777aaf945978 49b29a573da83b65d5f4ecf2db6 39c910a430370fd25d5b5e4b2f4
---------------- --------------------------- ---------------------------
%stddev %change %stddev %change %stddev
\ | \ | \
25513 ± 22% +2.7% 26206 ± 22% -19.2% 20609 ± 9% filebench.sum_operations/s
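(for reference, the %change columns appear to be relative to the first, base
commit column, e.g. (20609 - 25513) / 25513 ≈ -19.2%)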
but all the above data is not stable, so the kernel test robot won't report a
performance change based on this kind of data.
the detailed per-run data below is just FYI.
for 9aac777aaf ("filemap: Convert generic_perform_write() to support large folios")
"filebench.sum_operations/s": [
24749.259,
24963.29,
39646.08,
19061.09,
21232.461,
23361.606,
24028.835,
27065.077
],
for 49b29a573d ("nfs: add support for large folios")
"filebench.sum_operations/s": [
22241.08,
23600.1,
36988.03,
23380.36,
18751.434,
25665.28,
35146.827,
23874.923
],
for 39c910a430 ("nfs: do not extend writes to the entire folio")
"filebench.sum_operations/s": [
22756.036,
19669.426,
22478.429,
18800.486,
22682.041,
17167.844,
19160.962,
22163.603
],
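(if you want to reproduce the mean/%stddev figures in the table above from these
samples, then assuming they are saved one per line in samples.txt, something like
the awk below gives roughly the same numbers; the bot's exact stddev formula may
differ slightly:)

    awk '{ s += $1; ss += $1 * $1; n++ }
         END { m = s / n; sd = sqrt(ss / n - m * m);
               printf "mean=%.0f  stddev=%.0f (%.0f%%)\n", m, sd, 100 * sd / m }' samples.txt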