Date:   Sat, 26 Feb 2022 10:07:03 +1100
From:   Dave Chinner <david@...morbit.com>
To:     "Darrick J. Wong" <djwong@...nel.org>
Cc:     NeilBrown <neilb@...e.de>, Al Viro <viro@...iv.linux.org.uk>,
        Linux NFS Mailing List <linux-nfs@...r.kernel.org>,
        linux-fsdevel@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
        Daire Byrne <daire@...g.com>,
        Andreas Dilger <adilger.kernel@...ger.ca>
Subject: Re: [PATCH/RFC] VFS: support parallel updates in the one directory.

On Wed, Feb 23, 2022 at 08:43:28PM -0800, Darrick J. Wong wrote:
> On Wed, Feb 23, 2022 at 09:45:46AM +1100, Dave Chinner wrote:
> > On Tue, Feb 22, 2022 at 01:24:50PM +1100, NeilBrown wrote:
> > > 
> > > Hi Al,
> > >  I wonder if you might find time to have a look at this patch.  It
> > >  allows concurrent updates to a single directory.  This can result in
> > >  substantial throughput improvements when the application uses multiple
> > >  threads to create lots of files in the one directory, and there is
> > >  noticeable per-create latency, as there can be with NFS to a remote
> > >  server.
> > > Thanks,
> > > NeilBrown
> > > 
> > > Some filesystems can support parallel modifications to a directory,
> > > either because the modifications happen on a remote server which does its
> > > own locking (e.g.  NFS) or because they can internally lock just a part
> > > of a directory (e.g.  many local filesystems, with a bit of work - the
> > > Lustre project has patches for ext4 to support concurrent updates).
> > > 
> > > To allow this, we introduce VFS support for parallel modification:
> > > unlink (including rmdir) and create.  Parallel rename is not (yet)
> > > supported.
> > 
> > Yay!
> > 
> > > If a filesystem supports parallel modification in a given directory, it
> > > sets S_PAR_UNLINK on the inode for that directory.  lookup_open() and
> > > the new lookup_hash_modify() (similar to __lookup_hash()) notice the
> > > flag and take a shared lock on the directory, and rely on a lock-bit in
> > > d_flags, much like parallel lookup relies on DCACHE_PAR_LOOKUP.
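
To spell out the locking change for anyone who hasn't read the patch
itself, the core of the opt-in is roughly this (paraphrased sketch, not
the patch code verbatim; S_PAR_UNLINK and the d_flags lock bit are as
Neil describes above):

	/* in lookup_open()/lookup_hash_modify(), roughly: */
	if (dir->i_flags & S_PAR_UNLINK)
		inode_lock_shared(dir);	/* concurrent creates/unlinks allowed */
	else
		inode_lock(dir);	/* traditional exclusive serialisation */
	/*
	 * With only a shared lock on the directory, per-name
	 * serialisation then comes from a lock bit in dentry->d_flags,
	 * analogous to DCACHE_PAR_LOOKUP for parallel lookups.
	 */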
> > 
> > I suspect that you could enable this for XFS right now. XFS has internal
> > directory inode locking that should serialise all reads and writes
> > correctly regardless of what the VFS does. So while the VFS might
> > use concurrent updates (e.g. inode_lock_shared() instead of
> > inode_lock() on the dir inode), XFS has an internal metadata lock
> > that will then serialise the concurrent VFS directory modifications
> > correctly....
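
(Concretely, the XFS opt-in could be as small as something like this
hypothetical hunk in xfs_setup_inode(), relying on the existing ILOCK
for the actual directory format serialisation:

	/* hypothetical: advertise parallel dir updates to the VFS */
	if (S_ISDIR(VFS_I(ip)->i_mode))
		VFS_I(ip)->i_flags |= S_PAR_UNLINK;

untested, of course.)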
> 
> I don't think that will work because xfs_readdir doesn't hold the
> directory ILOCK while it runs, which means that readdir will see garbage
> if other threads now only hold inode_lock_shared while they update the
> directory.

It repeatedly picks up and drops the ILOCK as it maps buffers. IOWs,
the ILOCK serialises the lookup of the buffer at the next offset in
the readdir process, and the data is then read out while the buffer
is locked. Hence we'll always serialise the buffer lookup and read
against concurrent modifications, so we'll always get the next
directory buffer in ascending offset order. We then hold the buffer
locked while we read all the dirents out of it into the user buffer,
so that's also serialised against concurrent modifications.
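
In sketch form, the per-buffer pattern is something like this
(simplified pseudo-code, not the actual xfs_readdir() code; the
helper names here are made up):

	while (offset < end) {
		xfs_ilock(dp, XFS_ILOCK_SHARED);	/* serialise buffer lookup */
		bp = map_next_dir_buffer(dp, &offset);	/* made-up helper, returns locked buffer */
		xfs_iunlock(dp, XFS_ILOCK_SHARED);
		if (!bp)
			break;
		/* buffer lock still held: its dirents can't change under us */
		copy_dirents_to_user(bp, ctx);		/* made-up helper */
		xfs_trans_brelse(NULL, bp);		/* unlock and release buffer */
	}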

Also, remember that readdir does not guarantee that it returns all
entries in the face of concurrent modifications that remove entries.
Because a dirent's offset in the XFS data segment never changes, the
only time we might skip an entry is when it has been removed and it
was the last entry in a data block, so the entire data block goes
away between readdir buffer lookups. In that case, we just get the
next highest offset buffer returned, and we continue onwards.

If a hole is filled while we are walking, then we'll see the buffer
that was added into the hole. That new buffer is now at the next
highest offset, so readdir finding it is correct and valid
behaviour...

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
