Message-ID: <sa3yttklz3onf627vxqcjysgyoa455r3z7mgmbzmn3pgs7eawb@43tke54bauuz>
Date: Wed, 14 May 2025 15:51:54 +0200
From: Carlos Maiolino <cem@...nel.org>
To: hch <hch@....de>
Cc: Hans Holmberg <Hans.Holmberg@....com>,
"linux-xfs@...r.kernel.org" <linux-xfs@...r.kernel.org>, Dave Chinner <david@...morbit.com>,
"Darrick J . Wong" <djwong@...nel.org>, "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/2] Add mru cache for inode to zone allocation mapping
On Wed, May 14, 2025 at 03:00:14PM +0200, hch wrote:
> On Wed, May 14, 2025 at 10:50:36AM +0000, Hans Holmberg wrote:
> > While I was initially concerned by adding overhead to the allocation
> > path, the cache actually reduces it, as we avoid going through the
> > zone allocation algorithm for every random write.
> >
> > When I run a fio workload with 16 writers to different files in
> > parallel, bs=8k, iodepth=4, size=1G, I get these throughputs:
> >
> > baseline with_cache
> > 774 MB/s 858 MB/s (+11%)
> >
> > (averaged over three runs each on a nullblk device)
> >
> > I see similar figures when benchmarking on a ZNS NVMe drive (+17%).
>
> Very nice!
>
> These should probably go into the commit message for patch 2 so they
> are recorded. Carlos, is that something you can do when applying?
>
Absolutely. Could you send your Reviewed-by for patch 1? I just got your Reviewed-by on patch 2.
I'll add this to the tree today, I need to do another rebase anyway.