Message-ID: <CAC2o3D+28g67vbNOaVxuF0OfE0RjFGHVwAcA_3t1AAS_b_EnPg@mail.gmail.com>
Date: Wed, 12 May 2021 15:16:01 +0800
From: Fox Chen <foxhlchen@...il.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: Ian Kent <raven@...maw.net>, Tejun Heo <tj@...nel.org>,
Al Viro <viro@...iv.linux.org.uk>,
Eric Sandeen <sandeen@...deen.net>,
Brice Goglin <brice.goglin@...il.com>,
Rick Lindsley <ricklind@...ux.vnet.ibm.com>,
David Howells <dhowells@...hat.com>,
Miklos Szeredi <miklos@...redi.hu>,
Marcelo Tosatti <mtosatti@...hat.com>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v4 0/5] kernfs: proposed locking and concurrency improvement
On Wed, May 12, 2021 at 2:21 PM Greg Kroah-Hartman
<gregkh@...uxfoundation.org> wrote:
>
> On Wed, May 12, 2021 at 08:38:35AM +0800, Ian Kent wrote:
> > There have been a few instances of contention on the kernfs_mutex during
> > path walks, a case on very large IBM systems seen by myself, a report by
> > Brice Goglin and followed up by Fox Chen, and I've since seen a couple
> > of other reports by CoreOS users.
> >
> > The common thread is a large number of kernfs path walks leading to
> > slowness of path walks due to kernfs_mutex contention.
> >
> > The problem is that changes to the VFS over time have increased its
> > concurrency capabilities to the extent that kernfs's use of a mutex is
> > no longer appropriate. There is also a less common problem: walks for
> > non-existent paths can cause contention when there are many of them.
> >
> > This patch series is relatively straightforward.
> >
> > All it does is add the ability to take advantage of VFS negative dentry
> > caching to avoid needless dentry alloc/free cycles for lookups of paths
> > that don't exist, and change the kernfs_mutex to a read/write semaphore.
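A minimal sketch of the mutex-to-rwsem conversion pattern being described,
assuming illustrative names rather than the actual symbols in the series:

/*
 * Illustrative only: replace a mutex that serializes all accesses with a
 * read/write semaphore so that concurrent path walks (read side) no
 * longer exclude each other.  "example_rwsem" is a hypothetical name.
 */
#include <linux/rwsem.h>

static DECLARE_RWSEM(example_rwsem);

/* lookup/readdir style paths: many may run concurrently */
static void example_read_side(void)
{
	down_read(&example_rwsem);
	/* ... walk the tree without modifying it ... */
	up_read(&example_rwsem);
}

/* add/remove/rename style paths: still exclusive */
static void example_write_side(void)
{
	down_write(&example_rwsem);
	/* ... modify the tree ... */
	up_write(&example_rwsem);
}

With that shape, read-mostly workloads such as sysfs path walks only
contend with writers rather than with each other.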
> >
> > The patch that tried to stay in VFS rcu-walk mode during path walks has
> > been dropped for two reasons. First, it doesn't actually give very much
> > improvement and, second, if there's a place where mistakes could go
> > unnoticed it would be in that path. This makes the patch series simpler
> > to review and reduces the likelihood of problems going unnoticed and
> > popping up later.
> >
> > The patch to use a revision to identify if a directory has changed has
> > also been dropped. If the directory has changed, the dentry revision
> > needs to be updated to avoid subsequent rb tree searches, and after
> > changing to a read/write semaphore that update also requires a lock.
> > But d_lock is the only lock available at this point, and it might
> > itself be contended.
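For illustration, the general shape of that dropped idea would be a cached
directory revision that revalidation compares and refreshes under d_lock.
The names and the d_fsdata layout below are hypothetical, not the actual
patch (kernfs really keeps its node pointer in d_fsdata):

/*
 * Hypothetical sketch: cache a snapshot of the parent directory's change
 * revision in the dentry so revalidation can skip the rb-tree search when
 * nothing has changed.  Refreshing the snapshot needs a lock, and d_lock
 * is the only one available at that point.
 */
#include <linux/dcache.h>
#include <linux/spinlock.h>

struct example_dentry_info {
	unsigned long dir_revision;	/* snapshot of parent's revision */
};

static bool example_revision_current(struct dentry *dentry,
				     unsigned long cur_revision)
{
	struct example_dentry_info *info = dentry->d_fsdata;

	return info && info->dir_revision == cur_revision;
}

/* called after a successful lookup/revalidation of the dentry */
static void example_revision_refresh(struct dentry *dentry,
				     unsigned long cur_revision)
{
	struct example_dentry_info *info;

	spin_lock(&dentry->d_lock);	/* the potentially contended lock */
	info = dentry->d_fsdata;
	if (info)
		info->dir_revision = cur_revision;
	spin_unlock(&dentry->d_lock);
}

The concern stated above is exactly that every refresh takes d_lock, which
can itself become a hot spot.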
> >
> > Changes since v3:
> > - remove unneeded indirection when referencing the super block.
> > - check if inode attribute update is actually needed.
> >
> > Changes since v2:
> > - actually fix the inode attribute update locking.
> > - drop the patch that tried to stay in rcu-walk mode.
> > - drop the use a revision to identify if a directory has changed patch.
> >
> > Changes since v1:
> > - fix locking in .permission() and .getattr() by refactoring the attribute
> > handling code.
> > ---
> >
> > Ian Kent (5):
> > kernfs: move revalidate to be near lookup
> > kernfs: use VFS negative dentry caching
> > kernfs: switch kernfs to use an rwsem
> > kernfs: use i_lock to protect concurrent inode updates
> > kernfs: add kernfs_need_inode_refresh()
> >
> >
> > fs/kernfs/dir.c | 170 ++++++++++++++++++++----------------
> > fs/kernfs/file.c | 4 +-
> > fs/kernfs/inode.c | 45 ++++++++--
> > fs/kernfs/kernfs-internal.h | 5 +-
> > fs/kernfs/mount.c | 12 +--
> > fs/kernfs/symlink.c | 4 +-
> > include/linux/kernfs.h | 2 +-
> > 7 files changed, 147 insertions(+), 95 deletions(-)
> >
> > --
> > Ian
> >
>
> Any benchmark numbers that you ran that are better/worse with this patch
> series? That would be good to know; otherwise you aren't changing
> functionality here, so why would we take these changes? :)
Let me run it through my benchmark and get back to you with the results.
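For context, the contention shows up under many concurrent sysfs path
walks. The sketch below is only a hypothetical illustration of that kind
of load, not the actual benchmark referred to here; the sysfs path and
thread/iteration counts are arbitrary:

/*
 * Hypothetical microbenchmark: several threads repeatedly stat() a sysfs
 * path so most of the time is spent in path walks over kernfs.
 * Build with: gcc -O2 -pthread bench.c
 */
#include <pthread.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

#define NTHREADS 8
#define ITERS    100000

static const char *path = "/sys/devices/system/cpu/cpu0/topology/core_id";

static void *worker(void *arg)
{
	struct stat st;

	for (int i = 0; i < ITERS; i++)
		stat(path, &st);	/* exercises the kernfs path walk */
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.3f s\n", (t1.tv_sec - t0.tv_sec) +
			   (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}

Comparing wall-clock time for such a load before and after the series is
the kind of number being asked for.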
> thanks,
>
> greg k-h
thanks,
fox