Message-ID: <8734b3ac86.fsf@linux.dev>
Date: Thu, 10 Jul 2025 15:54:17 -0700
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Jan Kara <jack@...e.cz>, Matthew Wilcox <willy@...radead.org>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, Liu Shixin
<liushixin2@...wei.com>
Subject: Re: [PATCH] mm: consider disabling readahead if there are signs of
thrashing

Andrew Morton <akpm@...ux-foundation.org> writes:
> On Thu, 10 Jul 2025 12:52:32 -0700 Roman Gushchin <roman.gushchin@...ux.dev> wrote:
>
>> We've noticed in production that under a very heavy memory pressure
>> the readahead behavior becomes unstable causing spikes in memory
>> pressure and CPU contention on zone locks.
>>
>> The current mmap_miss heuristic considers minor pagefaults as a
>> good reason to decrease mmap_miss and conditionally start async
>> readahead. This creates a vicious cycle: asynchronous readahead
>> loads more pages, which in turn causes more minor pagefaults.
>> This problem is especially pronounced when multiple threads of
>> an application fault on consecutive pages of an evicted executable,
>> aggressively lowering the mmap_miss counter and preventing readahead
>> from being disabled.
>>
>> To improve the logic let's check for !uptodate and workingset
>> folios in do_async_mmap_readahead(). The presence of such pages
>> is a strong indicator of thrashing, which is also used by the
>> delay accounting code, e.g. in folio_wait_bit_common(). So instead
>> of decreasing mmap_miss and lowering the chances to disable readahead,
>> let's do the opposite and bump it by MMAP_LOTSAMISS / 2.
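
(To make this concrete, the new check in do_async_mmap_readahead() is
roughly along these lines; a sketch of the idea, not the exact diff:)

	mmap_miss = READ_ONCE(ra->mmap_miss);

	/*
	 * A folio which is !uptodate but still carries the workingset
	 * flag at fault time was evicted recently and is now being
	 * refaulted, i.e. we're thrashing. Instead of crediting
	 * mmap_miss (which keeps readahead alive), bump it so that
	 * readahead gets disabled sooner.
	 */
	if (!folio_test_uptodate(folio) && folio_test_workingset(folio)) {
		WRITE_ONCE(ra->mmap_miss, mmap_miss + MMAP_LOTSAMISS / 2);
		return fpin;
	}

	if (mmap_miss)
		WRITE_ONCE(ra->mmap_miss, --mmap_miss);
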
>
> Are there any testing results to share?

Nothing from production yet, but it makes a big difference for the
reproducer I use (authored by Greg Thelen), which basically runs a
huge binary with twice as many threads as CPUs in a very constrained
memory cgroup. Without this change the system oscillates between
performing more or less well and being completely stuck on zone lock
contention when 256 threads are all competing for a small number of
pages. With this change the system is pretty stable once it reaches
the point where readahead is disabled.
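
For reference, the shape of the reproducer is roughly the following
(my own sketch rather than Greg's actual tool; the input file and the
cgroup limit are up to the user, the file just has to be much bigger
than memory.max):

/*
 * Rough thrashing reproducer sketch: mmap a big file and let
 * 2x$(nproc) threads fault in consecutive pages over and over,
 * inside a tightly limited memory cgroup.
 *
 * gcc -O2 -pthread -o ra-thrash ra-thrash.c && ./ra-thrash <big file>
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static char *map;
static size_t len;
static long page;

static void *worker(void *arg)
{
	volatile char sum = 0;

	(void)arg;
	/* touch consecutive pages forever, like an evicted text segment */
	for (;;)
		for (size_t off = 0; off < len; off += page)
			sum += map[off];
	return NULL;
}

int main(int argc, char **argv)
{
	struct stat st;
	pthread_t *tids;
	long i, nthreads;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <big file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(argv[1]);
		return 1;
	}

	len = st.st_size;
	page = sysconf(_SC_PAGESIZE);
	map = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* 2x the number of CPUs, as described above */
	nthreads = 2 * sysconf(_SC_NPROCESSORS_ONLN);
	tids = calloc(nthreads, sizeof(*tids));

	for (i = 0; i < nthreads; i++)
		pthread_create(&tids[i], NULL, worker, NULL);
	for (i = 0; i < nthreads; i++)
		pthread_join(tids[i], NULL);
	return 0;
}
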
>
> What sort of workloads might be harmed by this change?

I hope none, but maybe I'm missing something.

>
> We do seem to be thrashing around (heh) with these readahead
> heuristics. Lots of potential for playing whack-a-mole.
>
> Should we make the readahead code more observable? We don't seem to
> have much in the way of statistics, counters, etc. And no tracepoints,
> which is surprising.

I think it's another good mm candidate for eventual bpf-ization (the
first being oom killer policies, which I'm working on). For example,
I can easily see a policy specific to a file format making a large
difference.

In this particular case I guess we could disable readahead based on
memory psi metrics, potentially all in bpf.
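
To illustrate the psi part, here is a userspace approximation of the
idea (not an actual bpf program; the thresholds and the single-mapping
scope are arbitrary): poll memory psi and toggle MADV_RANDOM, which
disables readahead for the mapping, once the pressure gets too high.

/*
 * gcc -O2 -o psi-ra psi-ra.c && ./psi-ra <file>
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* "some" avg10 from /proc/pressure/memory, in percent */
static double psi_some_avg10(void)
{
	char line[256];
	double avg10 = 0.0;
	FILE *f = fopen("/proc/pressure/memory", "r");

	if (!f)
		return 0.0;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "some avg10=%lf", &avg10) == 1)
			break;
	}
	fclose(f);
	return avg10;
}

int main(int argc, char **argv)
{
	int fd, ra_disabled = 0;
	struct stat st;
	void *map;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(argv[1]);
		return 1;
	}
	map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* the workload actually using the mapping is omitted here */
	for (;;) {
		double p = psi_some_avg10();

		if (p > 10.0 && !ra_disabled) {
			/* thresholds are made up */
			madvise(map, st.st_size, MADV_RANDOM);
			ra_disabled = 1;
		} else if (p < 1.0 && ra_disabled) {
			madvise(map, st.st_size, MADV_NORMAL);
			ra_disabled = 0;
		}
		sleep(1);
	}
}

A real policy would obviously want per-cgroup psi, smarter
file/mapping selection and tuned hysteresis, which is where bpf would
come in handy.
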
Thanks