Message-ID: <20220225231618.GQ3061737@dread.disaster.area>
Date: Sat, 26 Feb 2022 10:16:18 +1100
From: Dave Chinner <david@...morbit.com>
To: "Darrick J. Wong" <djwong@...nel.org>
Cc: Andreas Dilger <adilger@...ger.ca>, NeilBrown <neilb@...e.de>,
Al Viro <viro@...iv.linux.org.uk>,
Linux NFS Mailing List <linux-nfs@...r.kernel.org>,
linux-fsdevel@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
Daire Byrne <daire@...g.com>,
Andreas Dilger <adilger.kernel@...ger.ca>
Subject: Re: [PATCH/RFC] VFS: support parallel updates in the one directory.
On Thu, Feb 24, 2022 at 03:38:48PM -0800, Darrick J. Wong wrote:
> On Thu, Feb 24, 2022 at 09:31:28AM -0700, Andreas Dilger wrote:
> > On Feb 23, 2022, at 22:57, NeilBrown <neilb@...e.de> wrote:
> > >
> > >
> > > I added this:
> > > --- a/fs/xfs/xfs_icache.c
> > > +++ b/fs/xfs/xfs_icache.c
> > > @@ -87,6 +87,7 @@ xfs_inode_alloc(
> > > /* VFS doesn't initialise i_mode or i_state! */
> > > VFS_I(ip)->i_mode = 0;
> > > VFS_I(ip)->i_state = 0;
> > > + VFS_I(ip)->i_flags |= S_PAR_UPDATE;
> > > mapping_set_large_folios(VFS_I(ip)->i_mapping);
> > >
> > > XFS_STATS_INC(mp, vn_active);
> > >
> > > and ran my highly sophisticated test in an XFS directory:
> > >
> > > for i in {1..70}; do ( for j in {1000..8000}; do touch $j; rm -f $j ; done ) & done
>
> I think you want something faster here, like ln to hardlink an existing
> file into the directory.
>
> > > This doesn't crash - which is a good sign.
> > > While that was running, I tried
> > > while : ; do ls -l ; done
> > >
> > > it sometimes reports garbage for the stat info:
> > >
> > > total 0
> > > -????????? ? ? ? ? ? 1749
> > > -????????? ? ? ? ? ? 1764
> > > -????????? ? ? ? ? ? 1765
> > > -rw-r--r-- 1 root root 0 Feb 24 16:47 1768
> > > -rw-r--r-- 1 root root 0 Feb 24 16:47 1770
> > > -rw-r--r-- 1 root root 0 Feb 24 16:47 1772
> > > ....
> > >
> > > I *think* that is bad - probably the "garbage" that you referred to?
> > >
> > > Obviously I get lots of
> > > ls: cannot access '1764': No such file or directory
> > > ls: cannot access '1749': No such file or directory
> > > ls: cannot access '1780': No such file or directory
> > > ls: cannot access '1765': No such file or directory
> > >
> > > but that is normal and expected when you are creating and deleting
> > > files during the ls.
> >
> > The "ls -l" output with "???" is exactly the case where the filename is
> > in readdir() but stat() on a file fails due to an unavoidable userspace
> > race between the two syscalls and the concurrent unlink(). This is
> > probably visible even without the concurrent dirops patch.
> >
> > The list of affected filenames even correlates with the reported errors:
> > 1749, 1764, 1765
> >
> > It looks like everything is working as expected.
>
> Here, yes.
>
> A problem that I saw a week or two ago with online fsck is that an evil
> thread repeatedly link()ing and unlink()ing a file in an otherwise
> empty directory, while racing a thread calling readdir() in a loop,
> will eventually trigger a corruption report on the directory namecheck.
> The loop in xfs_dir2_sf_getdents uses sfp->count as its loop counter,
> so it races with the unlink decrementing sfp->count and runs off the
> end of the inline directory data buffer.
Ah, shortform dirs might need the readdir moved inside the
lock_mode = xfs_ilock_data_map_shared(dp);
section so that the ILOCK is held while readdir is pulling the
dirents out of the inode - there's no buffer lock to serialise that
against concurrent modifications as there is for the block/leaf/node
formats.
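
Something like this untested sketch is what I have in mind, assuming
xfs_readdir() still short-circuits the XFS_DINODE_FMT_LOCAL case
before taking the ILOCK:

	/*
	 * Untested sketch: pull the shortform case under the ILOCK so
	 * the inline data fork can't change underneath
	 * xfs_dir2_sf_getdents() while it walks sfp->count entries.
	 */
	if (dp->i_df.if_format == XFS_DINODE_FMT_LOCAL) {
		lock_mode = xfs_ilock_data_map_shared(dp);
		error = xfs_dir2_sf_getdents(&args, ctx);
		xfs_iunlock(dp, lock_mode);
		return error;
	}

The other formats are serialised by the buffer lock, so they
shouldn't need anything like this.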
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com