Message-ID: <20250514104937.15380-1-hans.holmberg@wdc.com>
Date: Wed, 14 May 2025 10:50:36 +0000
From: Hans Holmberg <Hans.Holmberg@....com>
To: "linux-xfs@...r.kernel.org" <linux-xfs@...r.kernel.org>
CC: Carlos Maiolino <cem@...nel.org>, Dave Chinner <david@...morbit.com>,
"Darrick J . Wong" <djwong@...nel.org>, hch <hch@....de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, Hans Holmberg
<Hans.Holmberg@....com>
Subject: [PATCH 0/2] Add mru cache for inode to zone allocation mapping
These patches clean up the xfs mru code a bit and add a cache for
keeping track of which zone an inode last allocated data to. Placing
file data in the same zone helps reduce garbage collection overhead,
and with this series we add support for per-file co-location for
random writes.
While I was initially concerned about adding overhead to the allocation
path, the cache actually reduces it, as we avoid going through the
zone allocation algorithm for every random write.
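To sketch the idea (this is illustrative only, not the patch itself:
the xfs_mru_cache_* calls are the existing API, but the m_zone_cache
mount field, the xfs_zone_cache_item struct and the helper names are
made up for this example, and zone reference counting is left out):

	struct xfs_zone_cache_item {
		struct xfs_mru_cache_elem	mru;	/* keyed by the inode number */
		struct xfs_open_zone		*oz;	/* zone this inode last wrote to */
	};

	static struct xfs_open_zone *
	xfs_get_cached_zone(
		struct xfs_mount	*mp,
		struct xfs_inode	*ip)
	{
		struct xfs_mru_cache_elem *mru;
		struct xfs_open_zone	*oz = NULL;

		mru = xfs_mru_cache_lookup(mp->m_zone_cache, ip->i_ino);
		if (mru) {
			oz = container_of(mru, struct xfs_zone_cache_item, mru)->oz;
			/* lookup returns with the cache locked, drop the lock */
			xfs_mru_cache_done(mp->m_zone_cache);
		}
		return oz;
	}

	static void
	xfs_set_cached_zone(
		struct xfs_mount	*mp,
		struct xfs_inode	*ip,
		struct xfs_open_zone	*oz)
	{
		struct xfs_zone_cache_item *item;

		item = kmalloc(sizeof(*item), GFP_NOFS);
		if (!item)
			return;		/* caching is best effort only */
		item->oz = oz;
		/* with patch 1, insert frees the item via the free function on failure */
		xfs_mru_cache_insert(mp->m_zone_cache, ip->i_ino, &item->mru);
	}

The allocation path can then try the cached zone first and only fall
back to the full zone selection (and re-cache the result) on a miss.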
When I run a fio workload with 16 writers to different files in
parallel, bs=8k, iodepth=4, size=1G, I get these throughputs:
  baseline      with_cache
  774 MB/s      858 MB/s  (+11%)
(averaged over three runs each on a nullblk device)
I see similar figures when benchmarking on a ZNS NVMe drive (+17%).
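For reference, the workload above roughly corresponds to a fio job
like the one below (the ioengine, direct and directory settings are
my assumptions here, the exact job file is not part of this posting):

	[global]
	rw=randwrite
	bs=8k
	iodepth=4
	size=1G
	numjobs=16
	ioengine=io_uring
	direct=1

	[writers]
	directory=/mnt/xfs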
No updates in the code since the RFC:
https://www.spinics.net/lists/linux-xfs/msg98889.html
Christoph Hellwig (1):
xfs: free the item in xfs_mru_cache_insert on failure
Hans Holmberg (1):
xfs: add inode to zone caching for data placement
fs/xfs/xfs_filestream.c | 15 ++----
fs/xfs/xfs_mount.h | 1 +
fs/xfs/xfs_mru_cache.c | 15 ++++--
fs/xfs/xfs_zone_alloc.c | 109 ++++++++++++++++++++++++++++++++++++++++
4 files changed, 126 insertions(+), 14 deletions(-)
--
2.34.1