Message-ID: <20250923184655.GF1587915@frogsfrogsfrogs>
Date: Tue, 23 Sep 2025 11:46:55 -0700
From: "Darrick J. Wong" <djwong@...nel.org>
To: Christoph Hellwig <hch@....de>
Cc: wangyufei <wangyufei@...o.com>, viro@...iv.linux.org.uk,
brauner@...nel.org, jack@...e.cz, cem@...nel.org,
kundan.kumar@...sung.com, anuj20.g@...sung.com, bernd@...ernd.com,
david@...morbit.com, linux-kernel@...r.kernel.org,
linux-xfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
opensource.kernel@...o.com
Subject: Re: [RFC 2/2] xfs: implement get_inode_wb_ctx_idx() for per-AG
parallel writeback
On Mon, Sep 22, 2025 at 06:56:42PM +0200, Christoph Hellwig wrote:
> On Sun, Sep 14, 2025 at 08:11:09PM +0800, wangyufei wrote:
> > The number of writeback contexts is set to the number of CPUs by
> > default. This allows XFS to decide how to assign inodes to writeback
> > contexts based on its allocation groups.
> >
> > Implement get_inode_wb_ctx_idx() in xfs_super_operations as follows:
> > - Limit the number of active writeback contexts to the number of AGs.
> > - Assign inodes from the same AG to a unique writeback context.
>
> I'm not sure this actually works. Data is spread over AGs, just with
> a default to the parent inode AG if there is space, and even that isn't
> true for the inode32 option or when using the RT subvolume.
I don't know of a better way to shard cheaply -- if you could group
inodes dynamically by a rough estimate of the AGs that map to the dirty
data (especially delalloc/unwritten/cow mappings) then that would be an
improvement, but that's still far from what I would consider the ideal.
Ideally (maybe?) one could shard dirty ranges first by the amount of
effort (pure overwrite first; then backed-by-unwritten; then
delalloc/cow). The first two groups could then be sharded by AG and
issued in parallel. The third group involves so many metadata changes
that you could probably just shard evenly across CPUs. Writebacks get
initiated in that order, and then we see where the bottlenecks lie in
ioend completion.
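
Something like this, in completely made-up userspace C -- none of these
types or helpers exist anywhere, it's just a model of that ordering,
not an implementation:

#include <stdint.h>
#include <stdio.h>

enum wb_effort {
	WB_OVERWRITE,		/* pure overwrite, cheapest ioend */
	WB_UNWRITTEN,		/* unwritten conversion at ioend */
	WB_DELALLOC_COW,	/* delalloc/cow, heavy metadata churn */
};

struct dirty_range {
	enum wb_effort	effort;
	uint32_t	agno;	/* AG backing (most of) the range */
	uint64_t	ino;
};

/* Pick a writeback context for one dirty range. */
static unsigned int
shard_dirty_range(const struct dirty_range *dr, unsigned int nr_wb_ctx,
		  unsigned int nr_cpus)
{
	switch (dr->effort) {
	case WB_OVERWRITE:
	case WB_UNWRITTEN:
		/* cheap completions: group by AG, issue in parallel */
		return dr->agno % nr_wb_ctx;
	case WB_DELALLOC_COW:
	default:
		/* heavy completions: just spread the work evenly */
		return dr->ino % nr_cpus;
	}
}

int main(void)
{
	struct dirty_range r = { WB_UNWRITTEN, 3, 1234 };

	printf("ctx %u\n", shard_dirty_range(&r, 8, 16));
	return 0;
}
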
(But that's just my hazy untested brai^Widea :P)
--D
> > +
> > + if (mp->m_sb.sb_agcount <= nr_wb_ctx)
> > + return XFS_INO_TO_AGNO(mp, xfs_inode->i_ino);
> > + return xfs_inode->i_ino % nr_wb_ctx;
> > +}
> > +
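
(For reference, what the mapping above works out to -- a toy userspace
model with XFS_INO_TO_AGNO stubbed out; when agcount <= nr_wb_ctx each
AG gets its own writeback context, otherwise it falls back to a plain
modulo over the inode number:)

#include <stdint.h>
#include <stdio.h>

/* stand-in for XFS_INO_TO_AGNO(); real inode numbers encode the AG
 * in their upper bits, here we just pretend there are 4 AGs */
static uint32_t fake_ino_to_agno(uint64_t ino)
{
	return ino % 4;
}

int main(void)
{
	unsigned int agcount = 4, nr_wb_ctx = 8;

	for (uint64_t ino = 0; ino < 8; ino++) {
		unsigned int idx;

		if (agcount <= nr_wb_ctx)
			idx = fake_ino_to_agno(ino);	/* one ctx per AG */
		else
			idx = ino % nr_wb_ctx;		/* plain modulo */
		printf("ino %llu -> wb ctx %u\n",
		       (unsigned long long)ino, idx);
	}
	return 0;
}
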
> > static const struct super_operations xfs_super_operations = {
> > .alloc_inode = xfs_fs_alloc_inode,
> > .destroy_inode = xfs_fs_destroy_inode,
> > @@ -1295,6 +1308,7 @@ static const struct super_operations xfs_super_operations = {
> > .free_cached_objects = xfs_fs_free_cached_objects,
> > .shutdown = xfs_fs_shutdown,
> > .show_stats = xfs_fs_show_stats,
> > + .get_inode_wb_ctx_idx = xfs_fs_get_inode_wb_ctx_idx,
> > };
> >
> > static int
> > --
> > 2.34.1
> ---end quoted text---
>