Message-ID: <20231019155958.7ek7oyljs6y44ah7@f>
Date: Thu, 19 Oct 2023 17:59:58 +0200
From: Mateusz Guzik <mjguzik@...il.com>
To: Christian Brauner <brauner@...nel.org>
Cc: Dave Chinner <dchinner@...hat.com>, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-bcachefs@...r.kernel.org,
Kent Overstreet <kent.overstreet@...ux.dev>,
Alexander Viro <viro@...iv.linux.org.uk>
Subject: Re: (subset) [PATCH 22/32] vfs: inode cache conversion to hash-bl
On Thu, Oct 19, 2023 at 05:30:40PM +0200, Mateusz Guzik wrote:
> On Tue, May 23, 2023 at 11:28:38AM +0200, Christian Brauner wrote:
> > On Tue, 09 May 2023 12:56:47 -0400, Kent Overstreet wrote:
> > > Because scalability of the global inode_hash_lock really, really
> > > sucks.
> > >
> > > 32-way concurrent create on a couple of different filesystems
> > > before:
> > >
> > > - 52.13% 0.04% [kernel] [k] ext4_create
> > > - 52.09% ext4_create
> > > - 41.03% __ext4_new_inode
> > > - 29.92% insert_inode_locked
> > > - 25.35% _raw_spin_lock
> > > - do_raw_spin_lock
> > > - 24.97% __pv_queued_spin_lock_slowpath
> > >
> > > [...]
> >
> > This is interesting completely independent of bcachefs so we should give
> > it some testing.
> >
> > I updated a few places that had outdated comments.
> >
> > ---
> >
> > Applied to the vfs.unstable.inode-hash branch of the vfs/vfs.git tree.
> > Patches in the vfs.unstable.inode-hash branch should appear in linux-next soon.
> >
> > Please report any outstanding bugs that were missed during review in a
> > new review to the original patch series allowing us to drop it.
> >
> > It's encouraged to provide Acked-bys and Reviewed-bys even though the
> > patch has now been applied. If possible patch trailers will be updated.
> >
> > tree: https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git
> > branch: vfs.unstable.inode-hash
> >
> > [22/32] vfs: inode cache conversion to hash-bl
> > https://git.kernel.org/vfs/vfs/c/e3e92d47e6b1
>
> What, if anything, is blocking this? It has been over 5 months now, and
> I don't see it in master or in -next.
>
> To be clear there is no urgency as far as I'm concerned, but I did run
> into something which is primarily bottlenecked by inode hash lock and
> looks like the above should sort it out.
>
> Looks like the patch was simply forgotten.
>
> tl;dr can this land in -next please
In case you can't be arsed, here is something funny which may convince
you to expedite. ;)
I did some benching by running 20 processes in parallel, each doing stat
on a tree of 1 million files (one tree per proc, 1000 dirs x 1000 files,
so 20 million inodes in total). Box had 24 cores and 24G RAM.
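For reference, the workload can be sketched roughly as below (a scaled-down, hypothetical reconstruction; the paths, counts, and use of multiprocessing are my own choices, not the exact harness used for the numbers that follow):

```python
import os
import multiprocessing as mp
import tempfile

def build_tree(root, ndirs, nfiles):
    """Create ndirs directories with nfiles empty files each under root."""
    for d in range(ndirs):
        path = os.path.join(root, f"d{d}")
        os.makedirs(path, exist_ok=True)
        for f in range(nfiles):
            open(os.path.join(path, f"f{f}"), "w").close()

def stat_tree(root):
    """stat() every file under root, like one benchmark worker."""
    count = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            os.stat(os.path.join(dirpath, name))
            count += 1
    return count

if __name__ == "__main__":
    # Original run: 20 procs, 1000 dirs x 1000 files each
    # (20 million inodes total). Scaled down here for illustration.
    procs, ndirs, nfiles = 4, 10, 10
    base = tempfile.mkdtemp(prefix="statbench-")
    roots = [os.path.join(base, str(i)) for i in range(procs)]
    for r in roots:
        build_tree(r, ndirs, nfiles)
    # One tree per process, all stat'ing in parallel.
    with mp.Pool(procs) as pool:
        counts = pool.map(stat_tree, roots)
    print(sum(counts))
```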
Best times:
Linux: 7.60s user 1306.90s system 1863% cpu 1:10.55 total
FreeBSD: 3.49s user 345.12s system 1983% cpu 17.573 total
OpenBSD: 5.01s user 6463.66s system 2000% cpu 5:23.42 total
DragonflyBSD: 11.73s user 1316.76s system 1023% cpu 2:09.78 total
OmniosCE: 9.17s user 516.53s system 1550% cpu 33.905 total
NetBSD failed to complete the run, OOM-killing workers:
http://mail-index.netbsd.org/tech-kern/2023/10/19/msg029242.html
OpenBSD is shafted by a big kernel lock, so no surprise it takes a long
time.
So what I find funny is that Linux needed more time than OmniosCE (an
Illumos variant, fork of Solaris).
It also needed more time than FreeBSD, which is not necessarily funny
but not that great either.
All systems were mostly busy contending on locks and in particular Linux
was almost exclusively busy waiting on inode hash lock.
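The patch being discussed replaces that single inode_hash_lock with hlist_bl, which packs a bit spinlock into each hash bucket's head pointer. As a structural illustration only, here is a userspace analogue in Python (ordinary mutexes standing in for bit spinlocks, and the class name and sizes are my own); the point is that inserts and lookups hitting different buckets no longer serialize on one global lock:

```python
import threading

class BucketLockedTable:
    """Hash table with one lock per bucket, the idea behind the
    hash-bl conversion: operations on different buckets do not
    contend on a single global lock."""

    def __init__(self, nbuckets=1024):
        self.buckets = [[] for _ in range(nbuckets)]
        self.locks = [threading.Lock() for _ in range(nbuckets)]

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def insert(self, key, value):
        i = self._index(key)
        with self.locks[i]:  # only this bucket is locked
            self.buckets[i].append((key, value))

    def lookup(self, key):
        i = self._index(key)
        with self.locks[i]:
            for k, v in self.buckets[i]:
                if k == key:
                    return v
        return None
```

In the kernel version there is no separate lock array: hlist_bl steals the low bit of the bucket head pointer as the lock, so the per-bucket locking costs no extra memory.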