Message-ID: <66f2751f-283e-816d-9530-765db7edc465@google.com>
Date: Mon, 8 Sep 2025 15:16:53 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: Alexander Krabler <Alexander.Krabler@...a.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...nel.org>,
Axel Rasmussen <axelrasmussen@...gle.com>, Chris Li <chrisl@...nel.org>,
Christoph Hellwig <hch@...radead.org>,
David Hildenbrand <david@...hat.com>, Frederick Mayle <fmayle@...gle.com>,
Jason Gunthorpe <jgg@...pe.ca>, Johannes Weiner <hannes@...xchg.org>,
John Hubbard <jhubbard@...dia.com>, Keir Fraser <keirf@...gle.com>,
Konstantin Khlebnikov <koct9i@...il.com>, Li Zhe <lizhe.67@...edance.com>,
Matthew Wilcox <willy@...radead.org>, Peter Xu <peterx@...hat.com>,
Rik van Riel <riel@...riel.com>, Shivank Garg <shivankg@....com>,
Vlastimil Babka <vbabka@...e.cz>, Wei Xu <weixugc@...gle.com>,
Will Deacon <will@...nel.org>, yangge <yangge1116@....com>,
Yuanchu Xie <yuanchu@...gle.com>, Yu Zhao <yuzhao@...gle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: [PATCH v2 2/6] mm/gup: local lru_add_drain() to avoid
lru_add_drain_all()
In many cases, if collect_longterm_unpinnable_folios() does need to
drain the LRU cache to release a reference, the cache in question is
on this same CPU, and it is much more efficiently drained by a
preliminary local lru_add_drain() than by the later cross-CPU
lru_add_drain_all().

Marked for stable, to counter the increase in lru_add_drain_all()s
from "mm/gup: check ref_count instead of lru before migration".

Note for clean backports: can take 6.16 commit a03db236aebf ("gup:
optimize longterm pin_user_pages() for large folio") first.
Signed-off-by: Hugh Dickins <hughd@...gle.com>
Cc: <stable@...r.kernel.org>
---
 mm/gup.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 82aec6443c0a..b47066a54f52 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2287,8 +2287,8 @@ static unsigned long collect_longterm_unpinnable_folios(
 		struct pages_or_folios *pofs)
 {
 	unsigned long collected = 0;
-	bool drain_allow = true;
 	struct folio *folio;
+	int drained = 0;
 	long i = 0;
 
 	for (folio = pofs_get_folio(pofs, i); folio;
@@ -2307,10 +2307,17 @@ static unsigned long collect_longterm_unpinnable_folios(
 			continue;
 		}
 
-		if (drain_allow && folio_ref_count(folio) !=
-				folio_expected_ref_count(folio) + 1) {
+		if (drained == 0 &&
+		    folio_ref_count(folio) !=
+		    folio_expected_ref_count(folio) + 1) {
+			lru_add_drain();
+			drained = 1;
+		}
+		if (drained == 1 &&
+		    folio_ref_count(folio) !=
+		    folio_expected_ref_count(folio) + 1) {
 			lru_add_drain_all();
-			drain_allow = false;
+			drained = 2;
 		}
 
 		if (!folio_isolate_lru(folio))
--
2.51.0