Message-ID: <aUJ4rjyAOW3EWC-k@infradead.org>
Date: Wed, 17 Dec 2025 01:32:30 -0800
From: Christoph Hellwig <hch@...radead.org>
To: Trond Myklebust <trondmy@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
linux-nfs@...r.kernel.org, linux-kernel@...r.kernel.org,
Alexander Viro <viro@...iv.linux.org.uk>,
Christian Brauner <brauner@...nel.org>,
linux-fsdevel@...r.kernel.org
Subject: NFS dentry caching regression? was Re: [GIT PULL] Please pull NFS
client updates for Linux 6.19

Hi all,

the merge of this branch causes the number of NFS lookup operations to
shoot up a lot for me. And by merge I mean the merge itself - both
parents of the merge on their own are fine.

With the script below, which simulates running python scripts with lots
of imports and was originally written to benchmark delegation
performance, the number of lookups in the measurement period shoots up
from 4 to about 410000, which is a bit suboptimal. I have no idea how
this could happen, but I guess it must be related to some sort of
pathname lookup change. Other operations look roughly the same.
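
(For reference, the lookup counts above come from the client-side
per-op counters; assuming a recent enough nfs-utils, something like

    nfsstat -c -l | grep -i lookup

run before and after each test shows the same delta that the script
below automates with nfsstat -S.)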
---
#!/usr/bin/env bash
set -euo pipefail
if [ $# -ne 1 ]; then
    echo "Usage: $0 <NFS_MOUNT_PATH>"
    exit 1
fi
NFS_MOUNT="$1"
WARMUP_FILE_COUNT="8000"
RUNS=200
MODULE_COUNT=200
SAVEFILE="/tmp/nfsstat.bak"
echo "=== NFS delegation benchmark ==="
echo "NFS mount: $NFS_MOUNT"
echo "Warmup file count: $WARMUP_FILE_COUNT"
echo "Number of runs: $RUNS"
echo "Module count: $MODULE_COUNT"
echo
################################################################################
# Step 1: Create temporary directory on NFS
################################################################################
TEST_DIR=$(mktemp -d "$NFS_MOUNT/test_deleg_bench.XXXXXX")
MODULE_DIR="$TEST_DIR/pymods"
mkdir -p "$MODULE_DIR/delegtest"
MODDIR_INIT="$MODULE_DIR/delegtest/__init__.py"
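# __init__.py fills __all__ with every module name in the package, so
# "from delegtest import *" imports each generated module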
cat > "$MODDIR_INIT" <<EOF
import os
import glob
file_paths = glob.glob(os.path.join(os.path.dirname(__file__), "*.py"))
__all__ = [
    os.path.basename(f)[:-3]
    for f in file_paths
    if os.path.isfile(f) and not f.endswith("__init__.py")
]
EOF
echo "[1] Creating $WARMUP_FILE_COUNT tiny files to accumulate delegations..."
mkdir -p "$TEST_DIR/fill"
for i in $(seq 1 "$WARMUP_FILE_COUNT"); do
echo "f$i" > "$TEST_DIR/fill/file_$i"
done
echo "[1] Warmup delegation files created."
################################################################################
# Step 2: Create many tiny Python modules to exercise import workload
################################################################################
echo "[2] Creating $MODULE_COUNT dummy python modules..."
for i in $(seq 1 "$MODULE_COUNT"); do
echo "x = $i" > "$MODULE_DIR/delegtest/mod$i.py"
done
#mount -o remount $NFS_MOUNT
# Python snippet:
# import all modN modules via "from delegtest import *"; the loop in
# step 4 runs it repeatedly
BENCH_SCRIPT="$TEST_DIR/bench.py"
cat > "$BENCH_SCRIPT" <<EOF
import sys
sys.path.insert(0, "$MODULE_DIR")
from delegtest import *
EOF
################################################################################
# Step 3: Pre-benchmark NFS client counters
################################################################################
echo "[3] Capturing baseline NFS client stats..."
sync "$NFS_MOUNT"
#mount -o remount $NFS_MOUNT
cp /proc/net/rpc/nfs "$SAVEFILE"
################################################################################
# Step 4: Run Python benchmark
################################################################################
echo "[4] Running Python import benchmark..."
time {
for i in $(seq 1 "$RUNS"); do
python3 "$BENCH_SCRIPT"
done
}
################################################################################
# Step 5: Produce NFS client delta report
################################################################################
echo
echo "=== NFS client DELTA REPORT ==="
echo
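# -S diffs the live counters against the saved snapshot; -l prints the
# result in list form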
nfsstat -c -S "$SAVEFILE" -l
rm -f "$SAVEFILE"
echo
echo "Test directory: $TEST_DIR"
echo
echo "=== Done ==="