Message-Id: <20200921162041.253386412@linuxfoundation.org>
Date: Mon, 21 Sep 2020 18:28:37 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Ralph Campbell <rcampbell@...dia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Yang Shi <shy828301@...il.com>, Zi Yan <ziy@...dia.com>,
Jerome Glisse <jglisse@...hat.com>,
John Hubbard <jhubbard@...dia.com>,
Alistair Popple <apopple@...dia.com>,
Christoph Hellwig <hch@....de>,
Jason Gunthorpe <jgg@...dia.com>,
Bharata B Rao <bharata@...ux.ibm.com>,
Ben Skeggs <bskeggs@...hat.com>, Shuah Khan <shuah@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 5.8 105/118] mm/thp: fix __split_huge_pmd_locked() for migration PMD
From: Ralph Campbell <rcampbell@...dia.com>

commit ec0abae6dcdf7ef88607c869bf35a4b63ce1b370 upstream.

A migrating transparent huge page has to already be unmapped. Otherwise,
the page could be modified while it is being copied to a new page and data
could be lost. The function __split_huge_pmd() checks for a PMD migration
entry before calling __split_huge_pmd_locked(), leading one to think that
__split_huge_pmd_locked() can handle splitting a migrating PMD.

However, the code always increments the page->_mapcount and adjusts the
memory control group accounting assuming the page is mapped.
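Reduced to a sketch, the accounting problem looks like this (pmd_migration
is the local flag __split_huge_pmd_locked() already derives from the entry
type; these lines only illustrate the hunks below, they are not a
substitute for them):

	/* before: a mapping is accounted even for a migration entry,
	 * i.e. for a THP that is already unmapped */
	atomic_inc(&page[i]._mapcount);

	/* after: only bump the subpage _mapcount when the PMD was mapped */
	if (!pmd_migration)
		atomic_inc(&page[i]._mapcount);

The diff below applies this guard at each place the old code assumed a
mapped page.
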
Also, if the PMD entry is a migration PMD entry, the call to
is_huge_zero_pmd(*pmd) is incorrect because it calls pmd_pfn(pmd) instead
of migration_entry_to_pfn(pmd_to_swp_entry(pmd)). Fix these problems by
checking for a PMD migration entry.
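The zero-page test trips over the same distinction: a migration PMD is a
swap-style entry, so its pfn cannot be read with pmd_pfn(). The following
hypothetical helper is shown only to spell out the two cases the changelog
names; it is not part of this patch:

	/* illustration only: deriving the pfn for either kind of huge PMD,
	 * using the helpers cited above */
	static unsigned long thp_pmd_pfn(pmd_t pmd)
	{
		if (is_pmd_migration_entry(pmd))
			/* swap-style entry: the pfn lives in the swap offset */
			return migration_entry_to_pfn(pmd_to_swp_entry(pmd));
		/* present huge PMD: the pfn is encoded in the entry itself */
		return pmd_pfn(pmd);
	}

The fix therefore lets is_huge_zero_pmd() run only when pmd_trans_huge(*pmd)
is true, and skips the mapcount and memcg updates when pmd_migration is set.
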
Fixes: 84c3fc4e9c56 ("mm: thp: check pmd migration entry in common path")
Signed-off-by: Ralph Campbell <rcampbell@...dia.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Reviewed-by: Yang Shi <shy828301@...il.com>
Reviewed-by: Zi Yan <ziy@...dia.com>
Cc: Jerome Glisse <jglisse@...hat.com>
Cc: John Hubbard <jhubbard@...dia.com>
Cc: Alistair Popple <apopple@...dia.com>
Cc: Christoph Hellwig <hch@....de>
Cc: Jason Gunthorpe <jgg@...dia.com>
Cc: Bharata B Rao <bharata@...ux.ibm.com>
Cc: Ben Skeggs <bskeggs@...hat.com>
Cc: Shuah Khan <shuah@...nel.org>
Cc: <stable@...r.kernel.org> [4.14+]
Link: https://lkml.kernel.org/r/20200903183140.19055-1-rcampbell@nvidia.com
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
mm/huge_memory.c | 42 +++++++++++++++++++++++-------------------
1 file changed, 23 insertions(+), 19 deletions(-)
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2048,7 +2048,7 @@ static void __split_huge_pmd_locked(stru
 		put_page(page);
 		add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
 		return;
-	} else if (is_huge_zero_pmd(*pmd)) {
+	} else if (pmd_trans_huge(*pmd) && is_huge_zero_pmd(*pmd)) {
 		/*
 		 * FIXME: Do we want to invalidate secondary mmu by calling
 		 * mmu_notifier_invalidate_range() see comments below inside
@@ -2142,30 +2142,34 @@ static void __split_huge_pmd_locked(stru
 		pte = pte_offset_map(&_pmd, addr);
 		BUG_ON(!pte_none(*pte));
 		set_pte_at(mm, addr, pte, entry);
-		atomic_inc(&page[i]._mapcount);
-		pte_unmap(pte);
-	}
-
-	/*
-	 * Set PG_double_map before dropping compound_mapcount to avoid
-	 * false-negative page_mapped().
-	 */
-	if (compound_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
-		for (i = 0; i < HPAGE_PMD_NR; i++)
+		if (!pmd_migration)
 			atomic_inc(&page[i]._mapcount);
+		pte_unmap(pte);
 	}
 
-	lock_page_memcg(page);
-	if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
-		/* Last compound_mapcount is gone. */
-		__dec_lruvec_page_state(page, NR_ANON_THPS);
-		if (TestClearPageDoubleMap(page)) {
-			/* No need in mapcount reference anymore */
+	if (!pmd_migration) {
+		/*
+		 * Set PG_double_map before dropping compound_mapcount to avoid
+		 * false-negative page_mapped().
+		 */
+		if (compound_mapcount(page) > 1 &&
+		    !TestSetPageDoubleMap(page)) {
 			for (i = 0; i < HPAGE_PMD_NR; i++)
-				atomic_dec(&page[i]._mapcount);
+				atomic_inc(&page[i]._mapcount);
+		}
+
+		lock_page_memcg(page);
+		if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
+			/* Last compound_mapcount is gone. */
+			__dec_lruvec_page_state(page, NR_ANON_THPS);
+			if (TestClearPageDoubleMap(page)) {
+				/* No need in mapcount reference anymore */
+				for (i = 0; i < HPAGE_PMD_NR; i++)
+					atomic_dec(&page[i]._mapcount);
+			}
 		}
+		unlock_page_memcg(page);
 	}
-	unlock_page_memcg(page);
 
 	smp_wmb(); /* make pte visible before pmd */
 	pmd_populate(mm, pmd, pgtable);