Message-ID: <cover.1756635044.git.repk@triplefau.lt>
Date: Sun, 31 Aug 2025 21:03:38 +0200
From: Remi Pommarel <repk@...plefau.lt>
To: v9fs@...ts.linux.dev
Cc: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
 Eric Van Hensbergen <ericvh@...nel.org>,
 Latchesar Ionkov <lucho@...kov.net>,
 Dominique Martinet <asmadeus@...ewreck.org>,
 Christian Schoenebeck <linux_oss@...debyte.com>,
 Remi Pommarel <repk@...plefau.lt>
Subject: [RFC PATCH 0/5] 9p: Performance improvements for build workloads

This patchset introduces several performance optimizations for the 9p
filesystem when used with the cache=loose option (exclusive or
read-only mounts). These improvements particularly target workloads
with frequent lookups of non-existent paths and repeated symlink
resolutions.

The benchmark (not the most scientific one, admittedly) consists of
cloning a fresh hostap repository and building hostapd and
wpa_supplicant for hwsim tests (cd tests/hwsim; time ./build.sh) in a
VM running on a 9pfs rootfs (mounted with the trans=virtio,cache=loose
options). It has been used to measure the impact of these
optimizations.

For reference, the build takes 0m56.492s natively on my laptop, while
it completes in 2m18.702s on the VM. This is a significant performance
penalty considering that running the same build in a VM using a
virtiofs rootfs (with the virtiofsd "--cache always" option) takes
around 1m32.141s. This patchset aims to bring the 9pfs build time
close to that of virtiofs, rather than to the native host time, as a
realistic expectation.

The first two patches in this series focus on keeping negative
dentries in the cache, ensuring that subsequent lookups for paths
known not to exist do not require redundant 9P RPC calls. This
optimization reduces the time the compiler spends probing for header
files across known locations. These two patches introduce a new mount
option, ndentrytmo, which specifies how long (in milliseconds) to keep
a negative dentry in the cache. Using ndentrytmo=-1 (keeping negative
dentries indefinitely) reduces the build time to 1m46.198s.
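
To illustrate the idea, here is a minimal, hedged sketch (not the code
from patches 1-2) of how a revalidate path could decide whether a
negative dentry is still usable. The "created" timestamp and where it
would be stored on the dentry are assumptions made for the example:

/*
 * Hedged sketch only: treat a negative dentry as still valid until the
 * hypothetical ndentrytmo timeout expires.  "created" stands for a
 * per-dentry timestamp recorded when the negative dentry was
 * instantiated; how it is stored is an assumption of this sketch.
 */
static bool v9fs_neg_dentry_valid(unsigned long created, long tmo_ms)
{
	if (tmo_ms < 0)		/* ndentrytmo=-1: keep forever */
		return true;

	return time_before(jiffies, created + msecs_to_jiffies(tmo_ms));
}

With something along these lines, ->d_revalidate() could report a
cached negative dentry as valid instead of dropping it and issuing a
new lookup RPC; mounting with e.g. trans=virtio,cache=loose,ndentrytmo=-1
would then keep negative dentries for the lifetime of the mount.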

The third patch extends page cache usage to symlinks by allowing
p9_client_readlink() results to be cached. Resolving symlinks is
apparently done quite frequently during the build process, and
avoiding the cost of a 9P RPC round trip for already known symlinks
reduces the build time to 1m26.602s, outperforming the virtiofs
setup.
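
As a rough illustration of the idea only (the actual patch goes
through the page cache), a ->get_link() implementation could stash the
first p9_client_readlink() result on the inode so later resolutions
skip the RPC. The function name is hypothetical, and freeing the
cached string on inode eviction as well as handling concurrent
resolvers are omitted:

/*
 * Hedged sketch only: cache the first readlink result on the inode so
 * later resolutions skip the 9P RPC.  Eviction-time freeing and
 * concurrent resolvers are not handled here.
 */
static const char *v9fs_get_link_cached(struct dentry *dentry,
					struct inode *inode,
					struct delayed_call *done)
{
	struct p9_fid *fid;
	char *target;
	int err;

	if (inode->i_link)		/* fast path: already resolved */
		return inode->i_link;

	if (!dentry)			/* RCU walk cannot issue an RPC */
		return ERR_PTR(-ECHILD);

	fid = v9fs_fid_lookup(dentry);
	if (IS_ERR(fid))
		return ERR_CAST(fid);

	err = p9_client_readlink(fid, &target);	/* one 9P round trip */
	p9_fid_put(fid);
	if (err)
		return ERR_PTR(err);

	inode->i_link = target;
	return target;
}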

The last two patches are only here to attribute time spent waiting
for server responses during a 9P RPC call to I/O wait time in system
metrics.
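
For context, a minimal sketch of what such a helper could look like,
by analogy with the existing wait_event_killable() and io_wait_event()
macros in include/linux/wait.h (this is an assumption about the shape
of patch 4, not a quote from it):

/*
 * Sketch: sleep killably while accounting the wait as I/O wait, i.e.
 * wait_event_killable() but with io_schedule() instead of schedule()
 * so the time shows up in iowait metrics.
 */
#define __io_wait_event_killable(wq_head, condition)			\
	___wait_event(wq_head, condition, TASK_KILLABLE, 0, 0,		\
		      io_schedule())

#define io_wait_event_killable(wq_head, condition)			\
({									\
	int __ret = 0;							\
	might_sleep();							\
	if (!(condition))						\
		__ret = __io_wait_event_killable(wq_head, condition);	\
	__ret;								\
})

Presumably net/9p/client.c can then use this in place of
wait_event_killable() while waiting for the server's reply, so that
time is reported as iowait.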

Here is a summary of the different hostapd/wpa_supplicant build times:

  - Baseline (no patch): 2m18.702s
  - Negative dentry caching (patches 1-2): 1m46.198s (23% improvement)
  - Above + symlink caching (patches 1-3): 1m26.302s (an additional 18%
    improvement, 37% in total)

With this ~37% performance gain, 9pfs with cache=loose can compete with
virtiofs for (at least) this specific scenario. Although this benchmark
is not the most typical, I do think that these caching optimizations
could benefit a wide range of other workflows as well.

Further investigation may be needed to address the remaining gap with
native build performance. The last two patches show that a fair amount
of time is still spent waiting for I/O. This could be related to the
two systematic RPC calls made when opening a file (one to clone the
fid and another to open it). Reusing fids or already opened files
could potentially reduce client/server transactions and bring
performance even closer to native levels, but these are just rough
thoughts I haven't dug into yet.

Any feedback on this approach would be welcome.

Thanks.

Best regards,

-- 
Remi

Remi Pommarel (5):
  9p: Cache negative dentries for lookup performance
  9p: Introduce option for negative dentry cache retention time
  9p: Enable symlink caching in page cache
  wait: Introduce io_wait_event_killable()
  9p: Track 9P RPC waiting time as IO

 fs/9p/fid.c            |  11 +++--
 fs/9p/v9fs.c           |  16 +++++-
 fs/9p/v9fs.h           |   3 ++
 fs/9p/v9fs_vfs.h       |  15 ++++++
 fs/9p/vfs_dentry.c     | 109 +++++++++++++++++++++++++++++++++++------
 fs/9p/vfs_inode.c      |  14 ++++--
 fs/9p/vfs_inode_dotl.c |  94 +++++++++++++++++++++++++++++++----
 include/linux/wait.h   |  15 ++++++
 net/9p/client.c        |   4 +-
 9 files changed, 244 insertions(+), 37 deletions(-)

-- 
2.50.1
