Message-ID: <20230913125113.313322-5-david@redhat.com>
Date: Wed, 13 Sep 2023 14:51:11 +0200
From: David Hildenbrand <david@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Muchun Song <muchun.song@...ux.dev>
Subject: [PATCH v1 4/6] mm/rmap: warn on new PTE-mapped folios in page_add_anon_rmap()
If the swapin code ever decides to stop using order-0 pages and to
supply a PTE-mapped large folio, we will have to change how we call
__folio_set_anon() -- eventually with exclusive=false and an address
adjusted to the head page. For now, let's add a VM_WARN_ON_FOLIO() with
a comment describing the situation.
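To illustrate (a hypothetical sketch, not part of this patch: it assumes
the large folio would be mapped contiguously, so the head page's address
can be derived from the PTE address via the page's offset within the
folio), the call might eventually look roughly like:

	if (folio_test_large(folio) && !compound)
		/*
		 * Only this single PTE is known to be exclusive, and
		 * folio->index must be derived from the head page's
		 * address: pass exclusive=false and adjust the address.
		 */
		__folio_set_anon(folio, vma,
				 address - folio_page_idx(folio, page) * PAGE_SIZE,
				 false);
	else
		__folio_set_anon(folio, vma, address,
				 !!(flags & RMAP_EXCLUSIVE));

folio_page_idx() is the existing helper returning a page's index within
its folio; whether that contiguity assumption would actually hold for
such a swapin path is part of what would have to be sorted out.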
Signed-off-by: David Hildenbrand <david@...hat.com>
---
mm/rmap.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/mm/rmap.c b/mm/rmap.c
index 1ac5bd1b8169..489c142d073b 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1238,6 +1238,13 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 
 	if (unlikely(!folio_test_anon(folio))) {
 		VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+		/*
+		 * For a PTE-mapped large folio, we only know that the single
+		 * PTE is exclusive. Further, __folio_set_anon() might not get
+		 * folio->index right when not given the address of the head
+		 * page.
+		 */
+		VM_WARN_ON_FOLIO(folio_test_large(folio) && !compound, folio);
 		__folio_set_anon(folio, vma, address,
 				 !!(flags & RMAP_EXCLUSIVE));
 	} else if (likely(!folio_test_ksm(folio))) {
--
2.41.0
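For context on the folio->index remark in the comment: __folio_set_anon()
derives the index from the address it is given, roughly as follows (a
simplified sketch of the mm/rmap.c logic, with the anon_vma setup
omitted):

	static void __folio_set_anon(struct folio *folio,
				     struct vm_area_struct *vma,
				     unsigned long address, bool exclusive)
	{
		/* anon_vma selection and folio->mapping setup omitted */

		/*
		 * linear_page_index() is essentially
		 * ((address - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff,
		 * so a tail page's address yields an index shifted by that
		 * page's position within the folio.
		 */
		folio->index = linear_page_index(vma, address);
	}

Hence the warning: for a PTE-mapped large folio, only the head page's
address produces the correct folio->index.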