Message-ID: <20230913125113.313322-4-david@redhat.com>
Date: Wed, 13 Sep 2023 14:51:10 +0200
From: David Hildenbrand <david@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Muchun Song <muchun.song@...ux.dev>
Subject: [PATCH v1 3/6] mm/rmap: move folio_test_anon() check out of __folio_set_anon()

Let's handle it in the caller; no need for the "first" check based on
the mapcount.

We really only end up with !anon pages in page_add_anon_rmap() via
do_swap_page(), where we hold the folio lock. So races are not possible.
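
For reference, the relevant do_swap_page() path looks roughly like this
(a condensed sketch of mm/memory.c, not verbatim kernel code; error
handling omitted):

	folio_lock(folio);	/* folio lock held from here on */
	...
	if (unlikely(folio != swapcache && swapcache)) {
		/* Freshly allocated page: starts out anon. */
		page_add_new_anon_rmap(page, vma, vmf->address);
		folio_add_lru_vma(folio, vma);
	} else {
		/* Can still be !anon when freshly read from swap. */
		page_add_anon_rmap(page, vma, vmf->address, rmap_flags);
	}
	...
	folio_unlock(folio);
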
Add a VM_WARN_ON_FOLIO() to make sure that we really hold the folio
lock.

In the future, we might want to let do_swap_page() use
folio_add_new_anon_rmap() on new pages instead; however, we might then
have to pass whether the folio is exclusive or not. So keep it in there
for now.
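
Such a conversion could look roughly like this (hypothetical sketch
only: folio_add_new_anon_rmap() does not take an "exclusive" parameter
at this point, it is purely illustrative):

	/* Hypothetical: the extra "exclusive" parameter does not exist yet. */
	folio_add_new_anon_rmap(folio, vma, vmf->address,
				!!(rmap_flags & RMAP_EXCLUSIVE));
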
For hugetlb we never expect to have a non-anon page in
hugepage_add_anon_rmap(). Remove that code, along with some other checks
that are either not required or were checked in
hugepage_add_new_anon_rmap() already.
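
For context, hugepage_add_new_anon_rmap() already performs the anon
setup when the folio is first mapped, which is why only anon folios can
show up in hugepage_add_anon_rmap(). Simplified sketch (some
hugetlb-specific handling omitted):

	void hugepage_add_new_anon_rmap(struct folio *folio,
			struct vm_area_struct *vma, unsigned long address)
	{
		BUG_ON(address < vma->vm_start || address >= vma->vm_end);
		/* increment count (starts at -1) */
		atomic_set(&folio->_entire_mapcount, 0);
		__folio_set_anon(folio, vma, address, true);
		SetPageAnonExclusive(&folio->page);
	}
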
Signed-off-by: David Hildenbrand <david@...hat.com>
---
 mm/rmap.c | 23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index ab16baa0fcb3..1ac5bd1b8169 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1135,9 +1135,6 @@ static void __folio_set_anon(struct folio *folio, struct vm_area_struct *vma,
 
 	BUG_ON(!anon_vma);
 
-	if (folio_test_anon(folio))
-		return;
-
 	/*
 	 * If the folio isn't exclusive to this vma, we must use the _oldest_
 	 * possible anon_vma for the folio mapping!
@@ -1239,12 +1236,12 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 	if (nr)
 		__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
 
-	if (likely(!folio_test_ksm(folio))) {
-		if (first)
-			__folio_set_anon(folio, vma, address,
-					 !!(flags & RMAP_EXCLUSIVE));
-		else
-			__page_check_anon_rmap(folio, page, vma, address);
+	if (unlikely(!folio_test_anon(folio))) {
+		VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+		__folio_set_anon(folio, vma, address,
+				 !!(flags & RMAP_EXCLUSIVE));
+	} else if (likely(!folio_test_ksm(folio))) {
+		__page_check_anon_rmap(folio, page, vma, address);
 	}
 	if (flags & RMAP_EXCLUSIVE)
 		SetPageAnonExclusive(page);
@@ -2528,17 +2525,13 @@ void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 		unsigned long address, rmap_t flags)
 {
 	struct folio *folio = page_folio(page);
-	struct anon_vma *anon_vma = vma->anon_vma;
 	int first;
 
-	BUG_ON(!folio_test_locked(folio));
-	BUG_ON(!anon_vma);
+	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
+
 	first = atomic_inc_and_test(&folio->_entire_mapcount);
 	VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
 	VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);
-	if (first)
-		__folio_set_anon(folio, vma, address,
-				 !!(flags & RMAP_EXCLUSIVE));
 	if (flags & RMAP_EXCLUSIVE)
 		SetPageAnonExclusive(page);
 }
--
2.41.0