Message-ID: <20191119151344.GD10763@bfoster>
Date: Tue, 19 Nov 2019 10:13:44 -0500
From: Brian Foster <bfoster@...hat.com>
To: Dave Chinner <david@...morbit.com>
Cc: linux-xfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 28/28] xfs: rework unreferenced inode lookups
On Mon, Nov 18, 2019 at 12:00:47PM +1100, Dave Chinner wrote:
> On Fri, Nov 15, 2019 at 12:26:00PM -0500, Brian Foster wrote:
> > On Fri, Nov 15, 2019 at 09:16:02AM +1100, Dave Chinner wrote:
> > > On Wed, Nov 06, 2019 at 05:18:46PM -0500, Brian Foster wrote:
> > > If so, most of this patch will go away....
> > >
> > > > > + * attached to the buffer so we don't need to do anything more here.
> > > > > */
> > > > > - if (ip != free_ip) {
> > > > > - if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) {
> > > > > - rcu_read_unlock();
> > > > > - delay(1);
> > > > > - goto retry;
> > > > > - }
> > > > > -
> > > > > - /*
> > > > > - * Check the inode number again in case we're racing with
> > > > > - * freeing in xfs_reclaim_inode(). See the comments in that
> > > > > - * function for more information as to why the initial check is
> > > > > - * not sufficient.
> > > > > - */
> > > > > - if (ip->i_ino != inum) {
> > > > > + if (__xfs_iflags_test(ip, XFS_ISTALE)) {
> > > >
> > > > Is there a correctness reason for why we move the stale check to under
> > > > ilock (in both iflush/ifree)?
> > >
> > > It's under the i_flags_lock, and so I moved it up under the lookup
> > > hold of the i_flags_lock so we don't need to cycle it again.
> > >
> >
> > Yeah, but in both cases it looks like it moved to under the ilock as
> > well, which comes after i_flags_lock. IOW, why grab ilock for stale
> > inodes when we're just going to skip them?
>
> Because I was worrying about serialising against reclaim before
> changing the state of the inode. i.e. if the inode has already been
> isolated but not yet disposed of, we shouldn't touch the inode state
> at all. Serialisation against reclaim in this patch is via the
> ILOCK, hence we need to do that before setting ISTALE....
>
Yeah, I think my question still isn't clear... I'm not talking about
setting ISTALE. The code I referenced above is where we test for it and
skip the inode if it is already set. For example, the code referenced
above in xfs_ifree_get_one_inode() currently does the following with
respect to i_flags_lock, ILOCK and XFS_ISTALE:
	...
	spin_lock(i_flags_lock)
	xfs_ilock_nowait(XFS_ILOCK_EXCL)
	if XFS_ISTALE
		skip
	set XFS_ISTALE
	...
The reclaim isolate code does this, however:
	spin_trylock(i_flags_lock)
	if !XFS_ISTALE
		skip
	xfs_ilock(XFS_ILOCK_EXCL)
	...
So my question is why not do something like the following in the
_get_one_inode() case?
	...
	spin_lock(i_flags_lock)
	if XFS_ISTALE
		skip
	xfs_ilock_nowait(XFS_ILOCK_EXCL)
	set XFS_ISTALE
	...
IOW, what is the need, if any, to acquire ilock in the iflush/ifree
paths before testing for XFS_ISTALE? Is there some specific intermediate
state I'm missing or is this just unintentional? The reason I ask is
ilock failure triggers that ugly delay(1) and retry thing, so it seems
slightly weird to allow that for a stale inode we're ultimately going to
skip (regardless of whether that would actually ever occur).
Brian
> IOWs, ISTALE is not protected by ILOCK, we just can't modify the
> inode state until after we've gained the ILOCK to protect against
> reclaim....
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@...morbit.com
>