Message-Id: <20200102030421.30799-1-richardw.yang@linux.intel.com>
Date: Thu, 2 Jan 2020 11:04:21 +0800
From: Wei Yang <richardw.yang@...ux.intel.com>
To: akpm@...ux-foundation.org, kirill.shutemov@...ux.intel.com
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
richard.weiyang@...il.com, Wei Yang <richardw.yang@...ux.intel.com>
Subject: [RFC PATCH] mm/rmap.c: finer hwpoison granularity for PTE-mapped THP
Currently we behave differently for PMD-mapped THP and PTE-mapped THP
in memory_failure().

User-visible difference:

For a PTE-mapped THP, the whole 2M range will trigger an MCE after
memory_failure(), while for a PMD-mapped THP only the affected 4K range
will.

Direct reason:

For a PTE-mapped THP, all 512 PTE entries are marked as hwpoison
entries, while for a PMD-mapped THP only one PTE is marked.

Root reason:

A PTE-mapped THP does not need a pmd split, so the SPLIT_FREEZE step is
skipped. try_to_unmap_one() therefore does its normal job while the THP
is not yet split, and since the page is HWPOISON, every PTE entry
mapping the THP is marked as a hwpoison entry.

For a PMD-mapped THP, SPLIT_FREEZE saves migration entries into the
ptes instead, which means try_to_unmap_one() is skipped before the THP
is split. Only the affected 4K page is then marked as a hwpoison entry.

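For reference, the pre-patch hwpoison branch in try_to_unmap_one()
looks roughly like this (a simplified sketch; hugetlb accounting and
error handling are elided):

  if (PageHWPoison(page) && !(flags & TTU_IGNORE_HWPOISON)) {
      /*
       * For an unsplit PTE-mapped THP the rmap walk visits all 512
       * ptes, and each of them takes this branch, so the whole 2M
       * mapping ends up as hwpoison entries.
       */
      pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
      if (PageHuge(page)) {
          set_huge_swap_pte_at(mm, address, pvmw.pte, pteval,
                               vma_mmu_pagesize(vma));
      } else {
          dec_mm_counter(mm, mm_counter(page));
          set_pte_at(mm, address, pvmw.pte, pteval);
      }
  }
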
This patch tries to provide finer granularity for PTE-mapped THP by
marking only the affected subpage as a hwpoison entry when the THP is
not yet split.

Signed-off-by: Wei Yang <richardw.yang@...ux.intel.com>
---
This complicates the picture a little, but I have not found a better
way to improve it.

Also, I may have missed some cases or not handled things properly.

Looking forward to your comments.
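
For reference, with this patch applied the branch becomes roughly
(same simplification as the sketch above):

  if (PageHWPoison(page) && !(flags & TTU_IGNORE_HWPOISON)) {
      pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
      if (PageHuge(page)) {
          set_huge_swap_pte_at(mm, address, pvmw.pte, pteval,
                               vma_mmu_pagesize(vma));
      } else if (!PageAnon(page) || page == subpage) {
          /* file page, or the affected subpage itself */
          dec_mm_counter(mm, mm_counter(page));
          set_pte_at(mm, address, pvmw.pte, pteval);
      } else {
          /*
           * A still-healthy subpage of a not-yet-split anon THP:
           * fall through to the freeze (migration entry) path
           * instead of poisoning it.
           */
          goto freeze;
      }
  }
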
---
mm/rmap.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index b3e381919835..90229917dd64 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1554,10 +1554,11 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				set_huge_swap_pte_at(mm, address,
 						     pvmw.pte, pteval,
 						     vma_mmu_pagesize(vma));
-			} else {
+			} else if (!PageAnon(page) || page == subpage) {
 				dec_mm_counter(mm, mm_counter(page));
 				set_pte_at(mm, address, pvmw.pte, pteval);
-			}
+			} else
+				goto freeze;
 
 		} else if (pte_unused(pteval) && !userfaultfd_armed(vma)) {
 			/*
@@ -1579,6 +1580,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 			swp_entry_t entry;
 			pte_t swp_pte;
 
+freeze:
 			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
 				set_pte_at(mm, address, pvmw.pte, pteval);
 				ret = false;
--
2.17.1