Message-ID: <20250828114613.GC2848932@perftesting>
Date: Thu, 28 Aug 2025 07:46:13 -0400
From: Josef Bacik <josef@...icpanda.com>
To: Dave Chinner <david@...morbit.com>
Cc: Christian Brauner <brauner@...nel.org>, linux-fsdevel@...r.kernel.org,
linux-btrfs@...r.kernel.org, kernel-team@...com,
linux-ext4@...r.kernel.org, linux-xfs@...r.kernel.org,
viro@...iv.linux.org.uk, amir73il@...il.com
Subject: Re: [PATCH v2 17/54] fs: remove the inode from the LRU list on
unlink/rmdir
On Thu, Aug 28, 2025 at 08:01:39AM +1000, Dave Chinner wrote:
> On Wed, Aug 27, 2025 at 02:32:49PM +0200, Christian Brauner wrote:
> > On Tue, Aug 26, 2025 at 11:39:17AM -0400, Josef Bacik wrote:
> > > We can end up with an inode on the LRU list or the cached list, and
> > > then at some point in the future unlink that inode while it still
> > > holds an elevated i_count reference because it is on one of these
> > > lists.
> > >
> > > The more common case is the cached list. We open a file, write to it,
> > > and truncate some of it, which triggers the inode_add_lru code in the
> > > pagecache, adding the inode to the cached LRU. Then we unlink this
> > > inode, and it lingers until writeback or reclaim kicks in and removes
> > > it.
> > >
> > > To handle this case, delete the inode from the LRU list when it is
> > > unlinked, giving us the best-case scenario of freeing the inode
> > > immediately.
> > >
> > > Signed-off-by: Josef Bacik <josef@...icpanda.com>
> > > ---
> >
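For reference, the scenario described in the commit message above boils
down to a sequence like this (an illustrative userspace sketch only; the
file name is made up, and the LRU side effects are kernel-internal):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("scratch", O_CREAT | O_RDWR, 0644);

	if (fd < 0)
		return 1;
	write(fd, "data", 4);	/* dirty the pagecache */
	ftruncate(fd, 2);	/* the truncate path can call inode_add_lru(),
				 * putting the inode on the cached LRU */
	unlink("scratch");	/* i_nlink goes to 0, but the LRU reference
				 * keeps the inode around */
	close(fd);		/* final iput(); with this patch the inode is
				 * already off the LRU and can be freed now */
	return 0;
}
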
> > I'm not too fond of this particular change; I think it's really
> > misplaced, and the correct place is indeed drop_nlink() and
> > clear_nlink().
>
> I don't really like putting it in drop_nlink() because that then puts
> the inode LRU handling in the middle of filesystem transactions while
> lots of different filesystem locks are held.
>
> IF the LRU operations are in the VFS, then we know exactly what
> locks are held when they are performed (current behaviour). However,
> when they are done from the filesystem transaction context running
> drop_nlink(), we'll have a different set of locks and/or execution
> contexts for each different fs type.
>
> > I'm pretty sure that the number of callers that hold i_lock around
> > drop_nlink() and clear_nlink() is relatively small.
>
> I think the calling context problem is wider than the obvious issue
> with i_lock....
This is an internal LRU, so yes, we could potentially have locking issues,
but right now all LRU operations are nested inside the i_lock, and this is
purely about object lifetime. I'm not concerned about this being in the
bowels of any filesystem because it's purely list manipulation.
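
Concretely, the removal is just this shape (a minimal sketch, not the
actual patch; the helper name is made up, and the per-cpu nr_unused
accounting is omitted):

#include <linux/fs.h>
#include <linux/list_lru.h>
#include <linux/spinlock.h>

/* Drop the inode from the superblock LRU, nested inside i_lock just
 * like the existing LRU add/del paths in fs/inode.c. */
static void unlink_inode_lru_del(struct inode *inode)
{
	spin_lock(&inode->i_lock);
	list_lru_del_obj(&inode->i_sb->s_inode_lru, &inode->i_lru);
	spin_unlock(&inode->i_lock);
}

Nothing in there touches filesystem locks, which is why I'm comfortable
with it running from VFS context.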
And if it makes you feel better, the next patchset, queued up for after
the next merge window, deletes the LRU entirely, so you won't have to
worry about it for long :). Thanks,
Josef