Message-ID: <29a018c9d7dce71be8321c4f8a129a2880cf3348.camel@themaw.net>
Date: Mon, 19 Apr 2021 20:25:20 +0800
From: Ian Kent <raven@...maw.net>
To: Fox Chen <foxhlchen@...il.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Tejun Heo <tj@...nel.org>,
Brice Goglin <brice.goglin@...il.com>,
Rick Lindsley <ricklind@...ux.vnet.ibm.com>,
Al Viro <viro@...iv.linux.org.uk>,
Miklos Szeredi <miklos@...redi.hu>,
David Howells <dhowells@...hat.com>,
Eric Sandeen <sandeen@...deen.net>,
Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH v3 0/4] kernfs: proposed locking and concurrency
improvement
On Mon, 2021-04-19 at 15:56 +0800, Fox Chen wrote:
> On Fri, Apr 9, 2021 at 9:14 AM Ian Kent <raven@...maw.net> wrote:
> > There have been a few instances of contention on the kernfs_mutex
> > during path walks: a case on very large IBM systems seen by myself,
> > a report by Brice Goglin that was followed up by Fox Chen, and a
> > couple of other reports I've since seen from CoreOS users.
> >
> > The common thread is a large number of kernfs path walks leading to
> > slowness of path walks due to kernfs_mutex contention.
> >
> > The problem is that changes to the VFS over time have increased
> > its concurrency capabilities to the extent that kernfs's use of a
> > mutex is no longer appropriate. There's also a less common issue
> > of walks for non-existent paths causing contention when there are
> > quite a few of them.
> >
> > This patch series is relatively straightforward.
> >
> > All it does is add the ability to take advantage of VFS negative
> > dentry caching, to avoid needless dentry alloc/free cycles for
> > lookups of paths that don't exist, and change the kernfs_mutex to
> > a read/write semaphore.
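> >
> > As a rough illustration of the rwsem side (a sketch only, not the
> > actual patch hunks), read-side paths such as lookups take the
> > semaphore shared while tree modifications take it exclusive:
> >
> >     static DECLARE_RWSEM(kernfs_rwsem);
> >
> >     /* path walk lookups can now run concurrently */
> >     down_read(&kernfs_rwsem);
> >     kn = kernfs_find_ns(parent, name, ns);
> >     up_read(&kernfs_rwsem);
> >
> >     /* add/remove/rename still serialize against everything */
> >     down_write(&kernfs_rwsem);
> >     /* ... modify the kernfs node tree ... */
> >     up_write(&kernfs_rwsem);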
> >
> > The patch that tried to stay in VFS rcu-walk mode during path
> > walks has been dropped for two reasons. First, it doesn't actually
> > give very much improvement and, second, if there's a place where
> > mistakes could go unnoticed it would be in that path. This makes
> > the patch series simpler to review and reduces the likelihood of
> > problems going unnoticed and popping up later.
> >
> > The patch to use a revision to identify if a directory has changed
> > has also been dropped. If the directory has changed, the dentry
> > revision needs to be updated to avoid subsequent rb tree searches,
> > and after changing to a read/write semaphore that update also
> > requires a lock. But the d_lock is the only lock available at this
> > point, and it might itself be contended.
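> >
> > For context, that dropped scheme was along these lines (a sketch
> > only, with illustrative names; it isn't the actual patch):
> >
> >     /* in .d_revalidate(): if the parent directory hasn't
> >      * changed since this dentry was validated, skip the
> >      * rb tree search */
> >     if (kernfs_dir_unchanged(parent, dentry))
> >             return 1;
> >     /* refreshing the cached revision needs a lock, and the
> >      * only one available here is d_lock, which may itself
> >      * be contended */
> >     spin_lock(&dentry->d_lock);
> >     dentry->d_time = kernfs_dir_rev(parent);
> >     spin_unlock(&dentry->d_lock);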
> >
> > Changes since v2:
> > - actually fix the inode attribute update locking.
> > - drop the patch that tried to stay in rcu-walk mode.
> > - drop the patch that used a revision to identify if a directory
> >   has changed.
> >
> > Changes since v1:
> > - fix locking in .permission() and .getattr() by refactoring the
> >   attribute handling code.
> >
> > ---
> >
> > Ian Kent (4):
> > kernfs: move revalidate to be near lookup
> > kernfs: use VFS negative dentry caching
> > kernfs: switch kernfs to use an rwsem
> > kernfs: use i_lock to protect concurrent inode updates
> >
> >
> >  fs/kernfs/dir.c             | 240 +++++++++++++++++++++++----------------------------
> >  fs/kernfs/file.c            |   4 -
> >  fs/kernfs/inode.c           |  18 ++-
> >  fs/kernfs/kernfs-internal.h |   5 +
> >  fs/kernfs/mount.c           |  12 +-
> >  fs/kernfs/symlink.c         |   4 -
> >  include/linux/kernfs.h      |   2
> >  7 files changed, 155 insertions(+), 130 deletions(-)
> >
> > --
> >
>
> Hi Ian,
>
> I tested this patchset with my benchmark
> (https://github.com/foxhlchen/sysfs_benchmark) on a 96-CPU (AWS c5)
> machine.
>
> The result was promising:
> Before, one open+read+close cycle took 500us without much variation.
> With this patchset, the fastest cycle takes only 30us, though the
> slowest is still around 100us (due to the spinlock). perf report
> shows no more significant mutex contention.
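>
> The cycle being measured is essentially the loop below (simplified;
> the harness in the repo runs one such loop per CPU, and the sysfs
> path is just an example):
>
>     char buf[4096];
>     int i, fd;
>
>     for (i = 0; i < iterations; i++) {
>             fd = open("/sys/devices/system/cpu/cpu0/online", O_RDONLY);
>             if (fd >= 0) {
>                     read(fd, buf, sizeof(buf));
>                     close(fd);
>             }
>     }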
Thanks for this, Fox.

I'll have a look through the data a bit later.

For now, I'd like to keep the series as simple as possible.

But there shouldn't be a problem reading and comparing those
attributes between the kernfs node and the inode without taking
the additional lock. So a check could be done and the lock only
taken if an update is needed.

That may well improve that worst case quite a bit but, as I say,
it would need to be a follow-up change.
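
Roughly, in kernfs_refresh_inode() it could look like this (an
untested sketch, and the two helper names are made up for
illustration):

	/* cheap unlocked comparison first */
	if (!kernfs_iattrs_differ(kn, inode))
		return;

	spin_lock(&inode->i_lock);
	/* re-check under i_lock before copying the attributes */
	if (kernfs_iattrs_differ(kn, inode))
		kernfs_copy_attrs(kn, inode);
	spin_unlock(&inode->i_lock);
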
Ian