Message-ID: <Zs5Yac5V0pbz1PMF@dread.disaster.area>
Date: Wed, 28 Aug 2024 08:51:21 +1000
From: Dave Chinner <david@...morbit.com>
To: Christoph Hellwig <hch@....de>
Cc: "Darrick J. Wong" <djwong@...nel.org>,
Chandan Babu R <chandan.babu@...cle.com>,
Matthew Wilcox <willy@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-xfs@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 4/5] xfs: convert perag lookup to xarray
On Thu, Aug 22, 2024 at 05:45:48AM +0200, Christoph Hellwig wrote:
> On Wed, Aug 21, 2024 at 09:28:10AM -0700, Darrick J. Wong wrote:
> > On Wed, Aug 21, 2024 at 08:38:31AM +0200, Christoph Hellwig wrote:
> > > Convert the perag lookup from the legacy radix tree to the xarray,
> > > which allows for much nicer iteration and bulk lookup semantics.
> >
> > Looks like a pretty straightforward conversion. Is there a good
> > justification for converting the ici radix tree too? Or is it too
> > sparse to be worth doing?
>
> radix trees and xarrays have pretty similar behavior related to
> sparseness or waste of interior nodes due to it.
And the node size is still 64 entries, which matches the inode
chunk size. Hence a fully populated and cached inode chunk fills an
xarray node completely, just like it fills a radix tree node. So if
our inode allocation locality decisions work, we end up with good
population characteristics in the in-memory cache index, too.
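
For reference, here's a minimal compile-time sketch of the size
relationship being relied on here (not part of the patch; the local
defines are illustrative, the names in the comments are the real
kernel constants on a !CONFIG_BASE_SMALL config):

	/*
	 * Both the xarray/radix tree node fan-out (1 << XA_CHUNK_SHIFT)
	 * and the XFS inode allocation chunk (XFS_INODES_PER_CHUNK) are
	 * 64, and chunks start on 64-aligned aginos, so one fully cached
	 * chunk maps onto exactly one leaf node of the per-AG inode tree.
	 */
	#define NODE_SLOTS		64	/* 1 << XA_CHUNK_SHIFT */
	#define INODES_PER_CHUNK	64	/* XFS_INODES_PER_CHUNK */

	_Static_assert(INODES_PER_CHUNK == NODE_SLOTS,
		       "a fully cached inode chunk fills one tree node");
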
> So unless we find a
> better data structure for it, it would be worthwhile.
I have prototype patches to convert the ici radix tree to an xarray.
I wrote them a few months ago, but I never got the time to actually
test them because other stuff happened....
> But the ici radix tree does pretty funny things in terms of also
> protecting other fields with the lock synchronizing it, so the conversion
> is fairly complicated
The locking isn't a big deal - I just used xa_lock() and xa_unlock()
to use the internal xarray lock to replace the perag->pag_ici_lock.
This gives the same semantics of making external state and tree
state updates atomic.
e.g. this code in xfs_reclaim_inode():
	spin_lock(&pag->pag_ici_lock);
	if (!radix_tree_delete(&pag->pag_ici_root,
				XFS_INO_TO_AGINO(ip->i_mount, ino)))
		ASSERT(0);
	xfs_perag_clear_inode_tag(pag, NULLAGINO, XFS_ICI_RECLAIM_TAG);
	spin_unlock(&pag->pag_ici_lock);
becomes:
	xa_lock(&pag->pag_icache);
	if (__xa_erase(&pag->pag_icache,
			XFS_INO_TO_AGINO(ip->i_mount, ino)) != ip)
		ASSERT(0);
	xfs_perag_clear_inode_tag(pag, NULLAGINO, XFS_ICI_RECLAIM_TAG);
	xa_unlock(&pag->pag_icache);
so the clearing of the XFS_ICI_RECLAIM_TAG in the mp->m_perag tree
is still atomic w.r.t. the removal of the inode from the icache
xarray.
> and I don't feel like doing it right now, at least
> not without evaluating if, for example, an rhashtable might actually be
> the better data structure here. The downside of the rhashtable is
> that it doesn't support tags/masks and isn't great for iteration, so it
> might very much not be suitable.
The rhashtable is not suited to the inode cache at all. A very
common access pattern is iterating all the inodes in an inode
cluster (e.g. in xfs_iflush_cluster() or during an icwalk) and with
a radix tree or xarray, these lookups all hit the same node and
cachelines. We've optimised this into gang lookups, which means
all the inodes in a cluster are fetched at the same time via
sequential memory access.
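
As a rough sketch of that access pattern (this is not the actual
xfs_iflush_cluster()/icwalk code, just the same shape; the batch size
and the alignment mask here are illustrative):

	struct xfs_inode	*batch[32];
	xfs_agino_t		first_index;
	int			nr_found, i;

	/* round down to the first inode of the cluster/chunk */
	first_index = XFS_INO_TO_AGINO(mp, ip->i_ino) & ~31U;

	rcu_read_lock();
	nr_found = radix_tree_gang_lookup(&pag->pag_ici_root,
					  (void **)batch, first_index, 32);
	rcu_read_unlock();

	for (i = 0; i < nr_found; i++) {
		/* batch[i] came from adjacent slots in one tree node */
	}

With an xarray the same batched walk maps onto xa_for_each_range()
(or xas_for_each() under the lock), which iterates the same adjacent
slots in index order.
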
Moving to an rhashtable makes this iteration mechanism impossible
because the rhashtable is unordered. Every inode we look up now
takes at least one cacheline miss because it sits at some
completely random index in the rhashtable, not adjacent to
the last inode we looked up. Worse, we have to dereference each
object we find on the chain to do key matching, so it's at least two
cacheline accesses per inode lookup.
So instead of a cluster lookup of 32 inodes requiring only a few
cacheline accesses to walk down the tree and then 4 sequential
cacheline accesses to retrieve all the inode pointers (32 pointers x
8 bytes = 256 bytes, i.e. four 64-byte cachelines), we have at least
64 individual random cacheline accesses to get pointers to the same
number of inodes.
IOWs, a hashtable of any kind is way more inefficient than using the
radix tree or xarray when it comes to the sorts of lockless
sequential access patterns we use internally with the XFS inode
cache.
Keep in mind that I went through all this "scalable structure
analysis" back in 2007 before I replaced the hash table based inode
cache implementation with radix trees. Radix trees were a far better
choice than a hash table way back then, and nothing in our inode
cache access patterns and algorithms has really changed since
then....
-Dave.
--
Dave Chinner
david@...morbit.com