Message-Id: <20250922204921.898740570c9a595c75814753@linux-foundation.org>
Date: Mon, 22 Sep 2025 20:49:21 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Aubrey Li <aubrey.li@...ux.intel.com>
Cc: Matthew Wilcox <willy@...radead.org>, Nanhai Zou <nanhai.zou@...el.com>,
Gang Deng <gang.deng@...el.com>, Tianyou Li <tianyou.li@...el.com>,
Vinicius Gomes <vinicius.gomes@...el.com>, Tim Chen
<tim.c.chen@...ux.intel.com>, Chen Yu <yu.c.chen@...el.com>,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Jan Kara <jack@...e.cz>, Roman Gushchin
<roman.gushchin@...ux.dev>
Subject: Re: [PATCH] mm/readahead: Skip fully overlapped range
On Tue, 23 Sep 2025 11:59:46 +0800 Aubrey Li <aubrey.li@...ux.intel.com> wrote:
> RocksDB sequential read benchmark under high concurrency shows severe
> lock contention. Multiple threads may issue readahead on the same file
> simultaneously, which leads to heavy contention on the xas spinlock in
> filemap_add_folio(). Perf profiling indicates that 30%~60% of CPU time
> is spent there.
>
> To mitigate this issue, a readahead request will be skipped if its
> range is fully covered by an ongoing readahead. This avoids redundant
> work and significantly reduces lock contention. In a one-second sample,
> contention on the xas spinlock dropped from 138,314 to 2,144 events,
> resulting in a large performance improvement in the benchmark.
>
>                               w/o patch    w/ patch
> RocksDB-readseq (ops/sec)
> (32-threads)                     1.2M        2.4M
On which kernel version? In recent times we've made a few readahead
changes to address issues with high concurrency, so a quick retest on
mm.git's current mm-stable branch would be interesting, please.