Message-ID: <CAGsJ_4wnwAet4svDrxT4sTdp24sweAU-2VyYn3iNPOoaKdXxPw@mail.gmail.com>
Date: Sun, 30 Nov 2025 10:56:20 +0800
From: Barry Song <21cnbao@...il.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: Matthew Wilcox <willy@...radead.org>, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
loongarch@...ts.linux.dev, linuxppc-dev@...ts.ozlabs.org,
linux-riscv@...ts.infradead.org, linux-s390@...r.kernel.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [RFC PATCH 0/2] mm: continue using per-VMA lock when retrying
page faults after I/O

On Sun, Nov 30, 2025 at 8:28 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> On Thu, Nov 27, 2025 at 2:29 PM Barry Song <21cnbao@...il.com> wrote:
> >
> > On Fri, Nov 28, 2025 at 3:43 AM Matthew Wilcox <willy@...radead.org> wrote:
> > >
> > > [dropping individuals, leaving only mailing lists. please don't send
> > > this kind of thing to so many people in future]
> > >
> > > On Thu, Nov 27, 2025 at 12:22:16PM +0800, Barry Song wrote:
> > > > On Thu, Nov 27, 2025 at 12:09 PM Matthew Wilcox <willy@...radead.org> wrote:
> > > > >
> > > > > On Thu, Nov 27, 2025 at 09:14:36AM +0800, Barry Song wrote:
> > > > > > There is no need to always fall back to mmap_lock if the per-VMA
> > > > > > lock was released only to wait for pagecache or swapcache to
> > > > > > become ready.
> > > > >
> > > > > Something I've been wondering about is removing all the "drop the MM
> > > > > locks while we wait for I/O" gunk. It's a nice amount of code removed:
> > > >
> > > > I think the point is that page fault handlers should avoid holding the VMA
> > > > lock or mmap_lock for too long while waiting for I/O. Otherwise, those
> > > > writers and readers will be stuck for a while.
> > >
> > > There's a usecase some of us have been discussing off-list for a few
> > > weeks that our current strategy pessimises. It's a process with
> > > thousands (maybe tens of thousands) of threads. It has many more mapped
> > > files than it has memory that cgroups will allow it to use. So on a
> > > page fault, we drop the vma lock, allocate a page of ram, kick off the
> > > read, and sleep waiting for the folio to become uptodate; once it is, we
> > > return, expecting the page to still be there when we reenter filemap_fault.
> > > But it's under so much memory pressure that it's already been reclaimed
> > > by the time we get back to it. So all the threads just batter the
> > > storage re-reading data.
> >
> > Is this entirely the fault of re-entering the page fault? Under extreme
> > memory pressure, even if we map the pages, can't they still be reclaimed
> > quickly?
> >
> > >
> > > If we don't drop the vma lock, we can insert the pages in the page table
> > > and return, maybe getting some work done before this thread is
> > > descheduled.
> >
> > If we need to protect the page from being reclaimed too early, the fix
> > should reside within LRU management, not in page fault handling.
> >
> > Also, I gave an example where we may not drop the VMA lock if the folio is
> > already up to date. That case likely corresponds to yours, where the only
> > remaining work is completing the PTE mapping.
> >
> > >
> > > This use case also manages to get utterly hung up trying to do reclaim
> > > today with the mmap_lock held. So it manifests somewhat similarly to
> > > your problem (everybody ends up blocked on mmap_lock) but it has a
> > > rather different root cause.
> > >
> > > > I agree there’s room for improvement, but merely removing the "drop the MM
> > > > locks while waiting for I/O" code is unlikely to improve performance.
> > >
> > > I'm not sure it'd hurt performance. The "drop mmap locks for I/O" code
> > > was written before the VMA locking code was written. I don't know that
> > > it's actually helping these days.
> >
> > I am concerned that other write paths may still need to modify the VMA, for
> > example during splitting. Tail latency has long been a significant issue for
> > Android users, and we have observed it even with folio_lock, which has much
> > finer granularity than the VMA lock.
>
> Another corner case we need to consider is when there is a large VMA
> covering most of the address space, so holding a VMA lock during IO
> would resemble holding an mmap_lock, leading to the same issue that we
> faced before "drop mmap locks for I/O". We discussed this with Matthew
> in the context of the problem he mentioned (the page is reclaimed
> before page fault retry happens) with no conclusion yet.

Suren, thank you very much for your input.

Right. I think we may discover more corner cases on Android in places
where we previously saw VMA merging, such as between two native heap
mmap areas. This can happen fairly often, and we don’t want long BIO
queues to block those writers.
>
> >
> > >
> > > > The change would be much more complex, so I’d prefer to land the current
> > > > patchset first. At least this way, we avoid falling back to mmap_lock and
> > > > causing contention or priority inversion, with minimal changes.
> > >
> > > Uh, this is an RFC patchset. I'm giving you my comment, which is that I
> > > don't think this is the right direction to go in. Any talk of "landing"
> > > these patches is extremely premature.
> >
> > While I agree that there are other approaches worth exploring, I
> > remain entirely unconvinced that this patchset is the wrong
> > direction. With the current retry logic, it substantially reduces
> > mmap_lock acquisitions and is clear low-hanging fruit.
> >
> > Also, I am not referring to landing the RFC itself, but to a subsequent formal
> > patchset that retries using the per-VMA lock.
>
> I don't know if this direction is the right one but I agree with
> Matthew that we should consider alternatives before adopting a new
> direction. Hopefully we can find one fix for both problems rather than
> fixing each one in isolation.

As I mentioned in a follow-up reply to Matthew[1], I think the current
approach also helps in cases where pages are reclaimed during retries.
Previously, we required mmap_lock to retry, so any contention made it
hard to acquire and introduced high latency. During that time, pages
could be reclaimed before mmap_lock was obtained. Now that we only
require the per-VMA lock, retries can proceed much more easily than
before.
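
To make that concrete, here is a rough sketch of the retry flow we have
in mind. This is illustrative only, not the actual patch: the function
name is made up, while lock_vma_under_rcu(), handle_mm_fault() and the
fault flags are the existing kernel APIs:

/*
 * Illustrative sketch, not the actual patch: keep retrying the fault
 * under the per-VMA lock instead of falling back to mmap_lock.
 */
static vm_fault_t fault_retry_under_vma_lock(struct mm_struct *mm,
                                             unsigned long addr,
                                             unsigned int flags,
                                             struct pt_regs *regs)
{
        struct vm_area_struct *vma;
        vm_fault_t fault;

retry:
        vma = lock_vma_under_rcu(mm, addr);
        if (!vma)
                /* Caller falls back to the mmap_lock path. */
                return VM_FAULT_RETRY;

        fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
        /* The per-VMA lock has already been released at this point. */

        if (fault & VM_FAULT_RETRY) {
                /*
                 * The lock was dropped only to wait for I/O, so retry
                 * under the per-VMA lock again instead of falling back
                 * to mmap_read_lock().
                 */
                flags |= FAULT_FLAG_TRIED;
                goto retry;
        }
        return fault;
}

Note that when lock_vma_under_rcu() fails (e.g., the VMA is being
modified), we would still fall back to mmap_lock, so writers are not
starved.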

As long as we replace a big lock with a smaller one, there is less
chance of getting stuck in D state.

If either you or Matthew has a reproducer for this issue, I’d be
happy to try it out.

BTW, we also observed mmap_lock contention during MGLRU aging. TBH, the
non-RMAP clearing of the PTE young bit does not seem helpful on arm64,
which does not support non-leaf young bits at all. After disabling the
feature below (by writing 1 to lru_gen/enabled, which clears bit 0x0002),
we found that reclamation used less CPU and performed better.

echo 1 >/sys/kernel/mm/lru_gen/enabled

0x0002  Clearing the accessed bit in leaf page table entries in large
        batches, when MMU sets it (e.g., on x86). This behavior can
        theoretically worsen lock contention (mmap_lock). If it is
        disabled, the multi-gen LRU will suffer a minor performance
        degradation for workloads that contiguously map hot pages,
        whose accessed bits can be otherwise cleared by fewer larger
        batches.
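
If you want to clear only this cap while keeping the others, the mask
can be rewritten rather than overwritten. A trivial userspace sketch
(mine, not from the kernel tree), assuming the hex mask format shown
by the lru_gen sysfs interface:

#include <stdio.h>

int main(void)
{
        unsigned int caps;
        FILE *f = fopen("/sys/kernel/mm/lru_gen/enabled", "r");

        if (!f) {
                perror("open lru_gen/enabled");
                return 1;
        }
        if (fscanf(f, "%x", &caps) != 1) {
                fclose(f);
                return 1;
        }
        fclose(f);

        f = fopen("/sys/kernel/mm/lru_gen/enabled", "w");
        if (!f) {
                perror("reopen lru_gen/enabled");
                return 1;
        }
        /* Clear only the 0x0002 cap (batched leaf-PTE young-bit clearing). */
        fprintf(f, "0x%04x", caps & ~0x0002u);
        fclose(f);
        return 0;
}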

[1] https://lore.kernel.org/linux-mm/CAGsJ_4wvaieWtTrK+koM3SFu9rDExkVHX5eUwYiEotVqP-ndEQ@mail.gmail.com/

Thanks
Barry