Message-ID: <20250514130014.GA20738@lst.de>
Date: Wed, 14 May 2025 15:00:14 +0200
From: hch <hch@....de>
To: Hans Holmberg <Hans.Holmberg@....com>
Cc: "linux-xfs@...r.kernel.org" <linux-xfs@...r.kernel.org>,
Carlos Maiolino <cem@...nel.org>,
Dave Chinner <david@...morbit.com>,
"Darrick J . Wong" <djwong@...nel.org>, hch <hch@....de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/2] Add mru cache for inode to zone allocation mapping
On Wed, May 14, 2025 at 10:50:36AM +0000, Hans Holmberg wrote:
> While I was initially concerned about adding overhead to the allocation
> path, the cache actually reduces it, as we avoid going through the
> zone allocation algorithm for every random write.
>
> When I run a fio workload with 16 writers to different files in
> parallel, bs=8k, iodepth=4, size=1G, I get these throughputs:
>
> baseline    with_cache
> 774 MB/s    858 MB/s (+11%)
>
> (averaged over three runs each on a nullblk device)
>
> I see similar figures when benchmarking on a ZNS NVMe drive (+17%).
Very nice!
These should probably go into the commit message for patch 2 so they
are recorded. Carlos, is that something you can do when applying?
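For readers who haven't looked at the series yet: as I understand it, the
idea is to remember the zone an inode last allocated from, keyed on the
inode number, reusing the existing xfs_mru_cache infrastructure from
fs/xfs/xfs_mru_cache.c.  A very rough sketch of what the cached lookup on
the write path could look like is below.  Note this is illustration only,
not code from the patches: the struct, field and helper names
(xfs_zone_cache_item, m_zone_cache, xfs_zoned_cached_zone) are made up,
only the xfs_mru_cache_* calls are the real API.

	/*
	 * Rough sketch, not the actual patch.  Assumes a per-mount
	 * m_zone_cache created with xfs_mru_cache_create() and populated
	 * on successful zone allocations.
	 */
	struct xfs_zone_cache_item {
		struct xfs_mru_cache_elem	mru;	/* keyed on inode number */
		struct xfs_open_zone		*oz;	/* zone last written to */
	};

	static struct xfs_open_zone *
	xfs_zoned_cached_zone(
		struct xfs_inode		*ip)
	{
		struct xfs_mru_cache		*cache = ip->i_mount->m_zone_cache;
		struct xfs_mru_cache_elem	*elem;
		struct xfs_zone_cache_item	*item;
		struct xfs_open_zone		*oz;

		elem = xfs_mru_cache_lookup(cache, ip->i_ino);
		if (!elem)
			return NULL;	/* miss: fall back to the zone allocator */

		item = container_of(elem, struct xfs_zone_cache_item, mru);
		oz = item->oz;
		/* the real code would need to take a reference on oz here */
		xfs_mru_cache_done(cache);
		return oz;
	}

If the lookup misses (or the cached zone can no longer take the write),
the I/O falls back to the normal zone selection algorithm, which is the
expensive path the cache avoids for repeated random writes to the same
inode.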