Message-ID: <e54b1d8c-ad63-4c7a-8b1b-b7c3d76446f2@redhat.com>
Date: Mon, 1 Sep 2025 10:00:28 +0200
From: David Hildenbrand <david@...hat.com>
To: Hugh Dickins <hughd@...gle.com>, Andrew Morton <akpm@...ux-foundation.org>
Cc: Will Deacon <will@...nel.org>, Shivank Garg <shivankg@....com>,
Matthew Wilcox <willy@...radead.org>, Christoph Hellwig <hch@...radead.org>,
Keir Fraser <keirf@...gle.com>, Jason Gunthorpe <jgg@...pe.ca>,
John Hubbard <jhubbard@...dia.com>, Frederick Mayle <fmayle@...gle.com>,
Peter Xu <peterx@...hat.com>, "Aneesh Kumar K.V" <aneesh.kumar@...nel.org>,
Johannes Weiner <hannes@...xchg.org>, Vlastimil Babka <vbabka@...e.cz>,
Alexander Krabler <Alexander.Krabler@...a.com>, Ge Yang
<yangge1116@....com>, Li Zhe <lizhe.67@...edance.com>,
Chris Li <chrisl@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Axel Rasmussen <axelrasmussen@...gle.com>, Yuanchu Xie <yuanchu@...gle.com>,
Wei Xu <weixugc@...gle.com>, Konstantin Khlebnikov <koct9i@...il.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 2/7] mm/gup: check ref_count instead of lru before
migration
On 31.08.25 11:05, Hugh Dickins wrote:
> Will Deacon reports:-
>
> When taking a longterm GUP pin via pin_user_pages(),
> __gup_longterm_locked() tries to migrate target folios that should not
> be longterm pinned, for example because they reside in a CMA region or
> movable zone. This is done by first pinning all of the target folios
> anyway, collecting all of the longterm-unpinnable target folios into a
> list, dropping the pins that were just taken and finally handing the
> list off to migrate_pages() for the actual migration.
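
(For context, roughly that flow -- a heavily simplified sketch, not the
exact mm/gup.c code; the helper signatures below are approximated and
error handling is omitted:)

static long check_and_migrate_sketch(unsigned long nr_folios,
				     struct folio **folios)
{
	LIST_HEAD(movable_folio_list);
	unsigned long collected;

	/* Every folio already carries the pin just taken by GUP. */

	/* Gather the longterm-unpinnable ones (CMA, ZONE_MOVABLE, ...). */
	collected = collect_longterm_unpinnable_folios(&movable_folio_list,
						       folios, nr_folios);
	if (!collected)
		return 0;

	/*
	 * Drop the pins taken above on the collected folios, then hand the
	 * list to migrate_pages(); any unexpected extra reference makes the
	 * migration fail and pin_user_pages() returns -ENOMEM.
	 */
	return migrate_longterm_unpinnable_folios(&movable_folio_list,
						  folios, nr_folios);
}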
>
> It is critically important that no unexpected references are held on the
> folios being migrated, otherwise the migration will fail and
> pin_user_pages() will return -ENOMEM to its caller. Unfortunately, it is
> relatively easy to observe migration failures when running pKVM (which
> uses pin_user_pages() on crosvm's virtual address space to resolve
> stage-2 page faults from the guest) on a 6.15-based Pixel 6 device and
> this results in the VM terminating prematurely.
>
> In the failure case, 'crosvm' has called mlock(MLOCK_ONFAULT) on its
> mapping of guest memory prior to the pinning. Subsequently, when
> pin_user_pages() walks the page-table, the relevant 'pte' is not
> present and so the faulting logic allocates a new folio, mlocks it
> with mlock_folio() and maps it in the page-table.
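
(That userspace pattern is essentially the following -- a minimal sketch,
not crosvm's actual code:)

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/* Map guest memory and lock it lazily: pages only become resident and
 * mlocked as they are faulted in, e.g. when pin_user_pages() later
 * resolves a stage-2 fault. */
static void *map_guest_memory(size_t size)
{
	void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (mem == MAP_FAILED)
		return NULL;

	if (mlock2(mem, size, MLOCK_ONFAULT)) {
		munmap(mem, size);
		return NULL;
	}
	return mem;
}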
>
> Since commit 2fbb0c10d1e8 ("mm/munlock: mlock_page() munlock_page()
> batch by pagevec"), mlock/munlock operations on a folio (formerly page)
> are deferred. For example, mlock_folio() takes an additional reference
> on the target folio before placing it into a per-cpu 'folio_batch' for
> later processing by mlock_folio_batch(), which drops the refcount once
> the operation is complete. Processing of the batches is coupled with
> the LRU batch logic and can be forcefully drained with
> lru_add_drain_all() but as long as a folio remains unprocessed on the
> batch, its refcount will be elevated.
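
(Roughly what that looks like -- simplified from mm/mlock.c from memory,
with the mlocked-stat accounting omitted and the batch-flush conditions
approximate; the point is the folio_get() that stays until the per-CPU
batch is processed:)

void mlock_folio(struct folio *folio)
{
	struct folio_batch *fbatch;

	local_lock(&mlock_fbatch.lock);
	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);

	/* Extra reference held for as long as the folio sits in the batch. */
	folio_get(folio);
	if (!folio_batch_add(fbatch, mlock_lru(folio)) ||
	    folio_test_large(folio) || lru_cache_disabled())
		mlock_folio_batch(fbatch);	/* process batch, drop those refs */

	local_unlock(&mlock_fbatch.lock);
}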
>
> This deferred batching therefore interacts poorly with the pKVM pinning
> scenario as we can find ourselves in a situation where the migration
> code fails to migrate a folio due to the elevated refcount from the
> pending mlock operation.
>
> Hugh Dickins adds:-
>
> !folio_test_lru() has never been a very reliable way to tell if an
> lru_add_drain_all() is worth calling, to remove LRU cache references
> to make the folio migratable: the LRU flag may be set even while the
> folio is held with an extra reference in a per-CPU LRU cache.
>
> 5.18 commit 2fbb0c10d1e8 may have made it more unreliable. Then 6.11
> commit 33dfe9204f29 ("mm/gup: clear the LRU flag of a page before adding
> to LRU batch") tried to make it reliable, by moving LRU flag clearing;
> but missed the mlock/munlock batches, so still unreliable as reported.
>
> And it turns out to be difficult to extend 33dfe9204f29's LRU flag
> clearing to the mlock/munlock batches: if those operations do benefit
> from batching at all, then mlock/munlock cannot be as effective once
> that batching is so easily suppressed whenever the folio is !LRU.
>
> Instead, switch to an expected ref_count check, which was more reliable
> all along: some more false positives (unhelpful drains) than before, and
> never a guarantee that the folio will prove migratable, but better.
>
> Note for stable backports: requires 6.16 commit 86ebd50224c0 ("mm:
> add folio_expected_ref_count() for reference count calculation") and
> 6.17 commit ("mm: fix folio_expected_ref_count() when PG_private_2").
>
> Reported-by: Will Deacon <will@...nel.org>
> Link: https://lore.kernel.org/linux-mm/20250815101858.24352-1-will@kernel.org/
> Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages allocated from CMA region")
> Signed-off-by: Hugh Dickins <hughd@...gle.com>
> Cc: <stable@...r.kernel.org>
> ---
> mm/gup.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index adffe663594d..82aec6443c0a 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2307,7 +2307,8 @@ static unsigned long collect_longterm_unpinnable_folios(
>  			continue;
>  		}
> 
> -		if (!folio_test_lru(folio) && drain_allow) {
> +		if (drain_allow && folio_ref_count(folio) !=
> +				folio_expected_ref_count(folio) + 1) {
>  			lru_add_drain_all();
>  			drain_allow = false;
>  		}

In general, for the fix idea:

Acked-by: David Hildenbrand <david@...hat.com>

But as raised in reply to patch #1, we have to be a bit careful about
including private_2 in folio_expected_ref_count() at this point.
If we cannot include it in folio_expected_ref_count(), it's all going to
be a mess until PG_private_2 is removed for good.
So that part still needs to be figured out.
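
For reference, from memory the helper is currently roughly the following
(condensed); the GUP check above compares folio_ref_count() against this
plus 1 for the pin GUP itself holds, and the open question is whether a
PG_private_2 reference can safely be counted in here as well:

static inline int folio_expected_ref_count(const struct folio *folio)
{
	const int order = folio_order(folio);
	int ref_count = 0;

	if (folio_test_anon(folio)) {
		/* One reference per page from the swapcache. */
		ref_count += folio_test_swapcache(folio) << order;
	} else if (!((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS)) {
		/* One reference per page from the pagecache. */
		ref_count += !!folio->mapping << order;
		/* One reference from PG_private. */
		ref_count += folio_test_private(folio);
	}

	/* One reference per page table mapping. */
	return ref_count + folio_mapcount(folio);
}
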
--
Cheers
David / dhildenb