Message-ID: <CAHk-=whbD0zwn-0RMNdgAw-8wjVJFQh4o_hGqffazAiW7DwXSQ@mail.gmail.com>
Date: Mon, 23 Sep 2024 19:26:31 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Dave Chinner <david@...morbit.com>
Cc: Kent Overstreet <kent.overstreet@...ux.dev>, linux-bcachefs@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
Dave Chinner <dchinner@...hat.com>
Subject: Re: [GIT PULL] bcachefs changes for 6.12-rc1
On Mon, 23 Sept 2024 at 17:27, Dave Chinner <david@...morbit.com> wrote:
>
> However, the problematic workload is cold cache operations where
> the dentry cache repeatedly misses. This places all the operational
> concurrency directly on the inode hash as new inodes are inserted
> into the hash. Add memory reclaim and that adds contention as it
> removes inodes from the hash on eviction.
Yeah, and then we spend all the time just adding the inodes to the
hashes, and probably fairly seldom use them. Oh well.
And I had missed the issue with PREEMPT_RT and the fact that right now
the inode hash lock is outside the inode lock, which is problematic.
So it's all a bit nasty.
But I also assume most of the bad issues end up mainly showing up on
just fairly synthetic benchmarks with ramdisks, because even with a
good SSD I suspect the IO for the cold cache would still dominate?
Linus