Message-ID: <CAC2o3DKNc=sL2n8291Dpiyb0bRHaX=nd33ogvO_LkJqpBj-YmA@mail.gmail.com>
Date: Mon, 19 Apr 2021 15:56:06 +0800
From: Fox Chen <foxhlchen@...il.com>
To: Ian Kent <raven@...maw.net>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Tejun Heo <tj@...nel.org>,
Brice Goglin <brice.goglin@...il.com>,
Rick Lindsley <ricklind@...ux.vnet.ibm.com>,
Al Viro <viro@...iv.linux.org.uk>,
Miklos Szeredi <miklos@...redi.hu>,
David Howells <dhowells@...hat.com>,
Eric Sandeen <sandeen@...deen.net>,
Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH v3 0/4] kernfs: proposed locking and concurrency improvement
On Fri, Apr 9, 2021 at 9:14 AM Ian Kent <raven@...maw.net> wrote:
>
> There have been a few instances of contention on the kernfs_mutex during
> path walks, a case on very large IBM systems seen by myself, a report by
> Brice Goglin and followed up by Fox Chen, and I've since seen a couple
> of other reports by CoreOS users.
>
> The common thread is a large number of kernfs path walks leading to
> slowness of path walks due to kernfs_mutex contention.
>
> The problem is that changes to the VFS over time have increased its
> concurrency capabilities to the extent that kernfs's use of a mutex is
> no longer appropriate. There's also a less common problem: walks for
> non-existent paths cause contention when there are quite a few of them.
>
> This patch series is relatively straightforward.
>
> All it does is add the ability to take advantage of VFS negative dentry
> caching to avoid needless dentry alloc/free cycles for lookups of paths
> that don't exist, and change the kernfs_mutex to a read/write semaphore.
>
> The patch that tried to stay in VFS rcu-walk mode during path walks has
> been dropped for two reasons. First, it doesn't actually give very much
> improvement and, second, if there's a place where mistakes could go
> unnoticed it would be in that path. This makes the patch series simpler
> to review and reduces the likelihood of problems going unnoticed and
> popping up later.
>
> The patch to use a revision to identify if a directory has changed has
> also been dropped. If the directory has changed, the dentry revision
> needs to be updated to avoid subsequent rb-tree searches, and after
> changing to a read/write semaphore that update also requires a lock.
> But the d_lock is the only lock available at this point, and it might
> itself be contended.
>
> Changes since v2:
> - actually fix the inode attribute update locking.
> - drop the patch that tried to stay in rcu-walk mode.
> - drop the use a revision to identify if a directory has changed patch.
>
> Changes since v1:
> - fix locking in .permission() and .getattr() by re-factoring the attribute
> handling code.
>
> ---
>
> Ian Kent (4):
> kernfs: move revalidate to be near lookup
> kernfs: use VFS negative dentry caching
> kernfs: switch kernfs to use an rwsem
> kernfs: use i_lock to protect concurrent inode updates
>
>
> fs/kernfs/dir.c | 240 +++++++++++++++++++++++--------------------
> fs/kernfs/file.c | 4 -
> fs/kernfs/inode.c | 18 ++-
> fs/kernfs/kernfs-internal.h | 5 +
> fs/kernfs/mount.c | 12 +-
> fs/kernfs/symlink.c | 4 -
> include/linux/kernfs.h | 2
> 7 files changed, 155 insertions(+), 130 deletions(-)
>
> --
>
Hi Ian,
I tested this patchset with my benchmark
(https://github.com/foxhlchen/sysfs_benchmark) on a 96-CPU (AWS c5)
machine.
The result was promising:
Before, one open+read+close cycle took 500us without much variation.
With this patchset, the fastest cycle takes only 30us, though the
slowest is still around 100us (due to the spinlock). perf report shows
no more significant mutex contention.
For your reference, I put the outputs in the attachments.
thanks,
fox
Download attachment "result.after" of type "application/octet-stream" (4927 bytes)
Download attachment "result.before" of type "application/octet-stream" (5055 bytes)
Download attachment "perf_report" of type "application/octet-stream" (230766 bytes)