Message-ID: <20220120160822.666778608@infradead.org>
Date: Thu, 20 Jan 2022 16:55:18 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: mingo@...hat.com, tglx@...utronix.de, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-api@...r.kernel.org, x86@...nel.org, peterz@...radead.org,
pjt@...gle.com, posk@...gle.com, avagin@...gle.com,
jannh@...gle.com, tdelisle@...terloo.ca, mark.rutland@....com,
posk@...k.io
Subject: [RFC][PATCH v2 1/5] mm: Avoid unmapping pinned pages

Add a guarantee for Anon pages that pin_user_page*() ensures the
user-mapping of these pages stays preserved. In order to ensure this,
all rmap users have been audited (an illustrative caller-side sketch
of the resulting guarantee follows the changelog):

  vmscan:	already fails eviction due to page_maybe_dma_pinned()

  migrate:	migration will fail on pinned pages due to
		expected_page_refs() not matching, however that is
		*after* try_to_migrate() has already destroyed the
		user mapping of these pages. Add an early exit for
		this case.

  numa-balance:	as per the above, pinned pages cannot be migrated,
		however numa balancing scanning will happily PROT_NONE
		them to get usage information on these pages. Avoid
		this for pinned pages.

None of the other rmap users (damon, page-idle, mlock, ...) unmap the
page; they mostly just muck about with the referenced/dirty flags etc.

This same guarantee cannot be provided for Shared (file) pages due to
dirty page tracking.
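
As an illustration of what the guarantee buys a pin_user_pages*()
caller (this sketch is not part of the patch; the function name and
surrounding context are made up, and it assumes <linux/mm.h>):

	/*
	 * Illustrative sketch only: pin part of an anonymous user buffer
	 * for long-term access, e.g. on behalf of a device.
	 */
	static int example_pin_anon_buffer(unsigned long uaddr,
					   struct page **pages,
					   int nr_pages)
	{
		int pinned;

		/* FOLL_LONGTERM: the pin may outlive the current syscall. */
		pinned = pin_user_pages_fast(uaddr, nr_pages,
					     FOLL_WRITE | FOLL_LONGTERM, pages);
		if (pinned < 0)
			return pinned;

		/*
		 * For Anon pages the user mapping backing uaddr now stays
		 * intact: vmscan already refused to evict pinned pages, and
		 * with this patch migration and NUMA balancing leave the
		 * PTEs alone as well, until unpin_user_pages(pages, pinned)
		 * drops the pins.
		 */
		return pinned;
	}

Previously the pinned pages themselves would stay put, but migration
could still momentarily destroy the user mapping via try_to_migrate()
even though the migration itself was bound to fail.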
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
mm/migrate.c | 10 +++++++++-
mm/mprotect.c | 6 ++++++
2 files changed, 15 insertions(+), 1 deletion(-)
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1472,7 +1472,15 @@ int migrate_pages(struct list_head *from
 			nr_subpages = thp_nr_pages(page);
 			cond_resched();
 
-			if (PageHuge(page))
+			/*
+			 * If the page has a pin then expected_page_refs() will
+			 * not match and the whole migration will fail later
+			 * anyway, fail early and preserve the mappings.
+			 */
+			if (page_maybe_dma_pinned(page))
+				rc = -EAGAIN;
+
+			else if (PageHuge(page))
 				rc = unmap_and_move_huge_page(get_new_page,
 						put_new_page, private, page,
 						pass > 2, mode, reason,
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -106,6 +106,12 @@ static unsigned long change_pte_range(st
 				continue;
 
 			/*
+			 * Can't migrate pinned pages, avoid touching them.
+			 */
+			if (page_maybe_dma_pinned(page))
+				continue;
+
+			/*
 			 * Don't mess with PTEs if page is already on the node
 			 * a single-threaded process is running on.
 			 */
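
For background (not part of the patch): the new checks in both hunks
rely on page_maybe_dma_pinned(), which works because pin_user_pages*()
raises the head page's refcount by GUP_PIN_COUNTING_BIAS (1024) per
pin; that surplus is also what makes migration's expected_page_refs()
comparison fail later on. A simplified sketch of the idea, with the
helper name invented for illustration:

	/*
	 * Simplified sketch, not kernel source: a FOLL_PIN pin adds
	 * GUP_PIN_COUNTING_BIAS to the refcount of the (head) page, so a
	 * pinned page carries far more references than the "mappings +
	 * pagecache + 1" that migration expects to be able to freeze.
	 * Like the real page_maybe_dma_pinned(), this is a heuristic and
	 * can false-positive on very heavily shared pages.
	 */
	static bool example_page_looks_pinned(struct page *page)
	{
		/* Roughly what page_maybe_dma_pinned() tests. */
		return page_ref_count(compound_head(page)) >=
		       GUP_PIN_COUNTING_BIAS;
	}

Checking this up front in migrate_pages() preserves the user mapping
instead of tearing it down only to restore it once the refcount freeze
fails, and the same test in change_pte_range() keeps NUMA balancing
from PROT_NONE'ing pages that cannot be migrated anyway.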