Message-ID: <4D8172D7.3040201@jp.fujitsu.com>
Date: Thu, 17 Mar 2011 11:32:55 +0900
From: Hidetoshi Seto <seto.hidetoshi@...fujitsu.com>
To: Andrea Arcangeli <aarcange@...hat.com>,
Andi Kleen <andi@...stfloor.org>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Huang Ying <ying.huang@...el.com>,
Jin Dongming <jin.dongming@...css.fujitsu.com>,
linux-kernel@...r.kernel.org
Subject: [PATCH 3/4] Check whether pages are poisoned before copying
If a poisoned page is accessed during the page copy, whether it is one
of the 4K pages being collapsed or the new THP itself, an MCE is raised
and the system panics.

To avoid this, have __collapse_huge_page_copy() check both the source
4K pages and the destination THP for poison before copying. If a
poisoned page is found, cancel the collapse so that a poisoned 4K page
stays owned by the application (APL), or free the poisoned THP before
it is used.
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@...fujitsu.com>
Signed-off-by: Jin Dongming <jin.dongming@...css.fujitsu.com>
---
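Note for reviewers (below the fold, so this does not become part of the
commit message): the whole idea in miniature is "test for hardware
poison before touching page contents". A minimal sketch, assuming the
usual PageHWPoison()/copy_user_highpage() interfaces; checked_copy() is
a hypothetical helper for illustration only, the patch itself
open-codes the checks inside the copy loop:

	/*
	 * Illustration only, not part of this patch: reading a
	 * hw-poisoned page raises an MCE, so test both pages first
	 * and let the caller back out instead of copying.
	 */
	static int checked_copy(struct page *dst, struct page *src,
				unsigned long address,
				struct vm_area_struct *vma)
	{
		if (PageHWPoison(dst) || PageHWPoison(src))
			return 0;	/* caller cancels the collapse */

		copy_user_highpage(dst, src, address, vma);
		return 1;
	}

On failure the caller unwinds the same way collapse_huge_page() does in
the last hunk below: release the isolated ptes and reinstall the
original pmd.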
mm/huge_memory.c | 27 +++++++++++++++++++++++----
1 files changed, 23 insertions(+), 4 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c62176a..6345279 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1702,20 +1702,26 @@ out:
 	return isolated;
 }
 
-static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
-				      struct vm_area_struct *vma,
-				      unsigned long address)
+static int __collapse_huge_page_copy(pte_t *pte, struct page *page,
+				     struct vm_area_struct *vma,
+				     unsigned long address)
 {
 	pte_t *_pte;
 	for (_pte = pte; _pte < pte + HPAGE_PMD_NR; _pte++) {
 		pte_t pteval = *_pte;
 		struct page *src_page;
 
+		if (PageHWPoison(page))
+			return 0;
+
 		if (pte_none(pteval)) {
 			clear_user_highpage(page, address);
 			add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
 		} else {
 			src_page = pte_page(pteval);
+			if (PageHWPoison(src_page))
+				return 0;
+
 			copy_user_highpage(page, src_page, address, vma);
 			VM_BUG_ON(page_mapcount(src_page) != 1);
 			VM_BUG_ON(page_count(src_page) != 2);
@@ -1724,6 +1730,8 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
 		address += PAGE_SIZE;
 		page++;
 	}
+
+	return 1;
 }
 
 static void __collapse_huge_page_free_old_pte(pte_t *pte,
@@ -1893,7 +1901,9 @@ static void collapse_huge_page(struct mm_struct *mm,
 	 */
 	lock_page_nosync(new_page);
 
-	__collapse_huge_page_copy(pte, new_page, vma, address);
+	if (__collapse_huge_page_copy(pte, new_page, vma, address) == 0)
+		goto out_poison;
+
 	pte_unmap(pte);
 	__SetPageUptodate(new_page);
 	pgtable = pmd_pgtable(_pmd);
@@ -1930,6 +1940,15 @@ out_up_write:
 	up_write(&mm->mmap_sem);
 	return;
 
+out_poison:
+	release_all_pte_pages(pte);
+	pte_unmap(pte);
+	spin_lock(&mm->page_table_lock);
+	BUG_ON(!pmd_none(*pmd));
+	set_pmd_at(mm, address, pmd, _pmd);
+	spin_unlock(&mm->page_table_lock);
+	unlock_page(new_page);
+
 out:
 	mem_cgroup_uncharge_page(new_page);
 #ifdef CONFIG_NUMA
--
1.7.1