Message-Id: <9a548862b8a26cbccc14f2c6c9c3688813d8d14b.1428411003.git.jslaby@suse.cz>
Date: Tue, 7 Apr 2015 14:49:30 +0200
From: Jiri Slaby <jslaby@...e.cz>
To: stable@...r.kernel.org
Cc: linux-kernel@...r.kernel.org,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Hugh Dickins <hughd@...gle.com>,
James Hogan <james.hogan@...tec.com>,
David Rientjes <rientjes@...gle.com>,
Mel Gorman <mel@....ul.ie>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.cz>, Rik van Riel <riel@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Luiz Capitulino <lcapitulino@...hat.com>,
Nishanth Aravamudan <nacc@...ux.vnet.ibm.com>,
Lee Schermerhorn <lee.schermerhorn@...com>,
Steve Capper <steve.capper@...aro.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Jiri Slaby <jslaby@...e.cz>
Subject: [PATCH 3.12 001/155] mm/hugetlb: fix getting refcount 0 page in hugetlb_fault()
From: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
3.12-stable review patch. If anyone has any objections, please let me know.
===============
commit 0f792cf949a0be506c2aa8bfac0605746b146dda upstream.
When running the test which causes the race as shown in the previous patch,
we can hit the BUG "get_page() on refcount 0 page" in hugetlb_fault().
This race happens when the pte turns into a migration entry just after the
first check of is_hugetlb_entry_migration() in hugetlb_fault() has returned false.
To fix this, we need to check pte_present() again after huge_ptep_get().
This patch also reorders taking the ptl and doing pte_page(), because
pte_page() should be done under the ptl. Due to this reordering, we need to
use trylock_page() in the page != pagecache_page case to respect the locking
order (a userspace sketch of this trylock pattern follows after the --- separator below).
Fixes: 66aebce747ea ("hugetlb: fix race condition in hugetlb_fault()")
Signed-off-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Cc: Hugh Dickins <hughd@...gle.com>
Cc: James Hogan <james.hogan@...tec.com>
Cc: David Rientjes <rientjes@...gle.com>
Cc: Mel Gorman <mel@....ul.ie>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Michal Hocko <mhocko@...e.cz>
Cc: Rik van Riel <riel@...hat.com>
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Luiz Capitulino <lcapitulino@...hat.com>
Cc: Nishanth Aravamudan <nacc@...ux.vnet.ibm.com>
Cc: Lee Schermerhorn <lee.schermerhorn@...com>
Cc: Steve Capper <steve.capper@...aro.org>
Cc: <stable@...r.kernel.org> [3.2+]
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Jiri Slaby <jslaby@...e.cz> [backport to 3.12]
---
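A note on the locking move in this patch: the page table lock is now taken
before the page lock, which runs against the required ordering, so the patch
switches from lock_page() to trylock_page() and, on contention, backs off and
defers the retry to the next fault. Below is a minimal userspace sketch of
that trylock-and-back-off pattern. It is illustrative only, not kernel code;
lock_a, lock_b and work_needing_both() are hypothetical names, and POSIX
mutexes stand in for the ptl and the page lock. Build with gcc -pthread.

#include <pthread.h>
#include <stdio.h>

/* Required lock order: lock_a before lock_b. */
static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/*
 * Called with lock_b held. Blocking on lock_a here could deadlock
 * against a thread that takes lock_a then lock_b, so only try it and
 * let the caller back off on failure -- the same shape as the
 * trylock_page()/need_wait_lock dance in the patch.
 */
static int work_needing_both(void)
{
	if (pthread_mutex_trylock(&lock_a) != 0)
		return -1;	/* back off instead of deadlocking */
	/* ... both locks held: do the protected work ... */
	pthread_mutex_unlock(&lock_a);
	return 0;
}

int main(void)
{
	pthread_mutex_lock(&lock_b);
	if (work_needing_both() != 0)
		puts("contended: drop lock_b, wait, and retry the fault");
	pthread_mutex_unlock(&lock_b);
	return 0;
}

In the patch itself the back-off is the need_wait_lock path: everything is
dropped, then wait_on_page_locked() is called so that the retried fault does
not busy-loop against the current lock holder.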
mm/hugetlb.c | 51 ++++++++++++++++++++++++++++++++++++---------------
1 file changed, 36 insertions(+), 15 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ed00a70fb052..1b040cb0ac5b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2960,6 +2960,7 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	struct page *pagecache_page = NULL;
 	static DEFINE_MUTEX(hugetlb_instantiation_mutex);
 	struct hstate *h = hstate_vma(vma);
+	int need_wait_lock = 0;
 
 	address &= huge_page_mask(h);
 
@@ -2993,6 +2994,16 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	ret = 0;
 
 	/*
+	 * entry could be a migration/hwpoison entry at this point, so this
+	 * check prevents the kernel from going below assuming that we have
+	 * an active hugepage in pagecache. This goto expects the 2nd page fault,
+	 * and is_hugetlb_entry_(migration|hwpoisoned) check will properly
+	 * handle it.
+	 */
+	if (!pte_present(entry))
+		goto out_mutex;
+
+	/*
 	 * If we are going to COW the mapping later, we examine the pending
 	 * reservations for this page now. This will ensure that any
 	 * allocations necessary to record that reservation occur outside the
@@ -3011,29 +3022,31 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 								vma, address);
 	}
 
+	spin_lock(&mm->page_table_lock);
+
+	/* Check for a racing update before calling hugetlb_cow */
+	if (unlikely(!pte_same(entry, huge_ptep_get(ptep))))
+		goto out_page_table_lock;
+
 	/*
 	 * hugetlb_cow() requires page locks of pte_page(entry) and
 	 * pagecache_page, so here we need take the former one
 	 * when page != pagecache_page or !pagecache_page.
-	 * Note that locking order is always pagecache_page -> page,
-	 * so no worry about deadlock.
 	 */
 	page = pte_page(entry);
-	get_page(page);
 	if (page != pagecache_page)
-		lock_page(page);
-
-	spin_lock(&mm->page_table_lock);
-	/* Check for a racing update before calling hugetlb_cow */
-	if (unlikely(!pte_same(entry, huge_ptep_get(ptep))))
-		goto out_page_table_lock;
+		if (!trylock_page(page)) {
+			need_wait_lock = 1;
+			goto out_page_table_lock;
+		}
+
+	get_page(page);
 
 	if (flags & FAULT_FLAG_WRITE) {
 		if (!huge_pte_write(entry)) {
 			ret = hugetlb_cow(mm, vma, address, ptep, entry,
 							pagecache_page);
-			goto out_page_table_lock;
+			goto out_put_page;
 		}
 		entry = huge_pte_mkdirty(entry);
 	}
@@ -3041,7 +3054,10 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (huge_ptep_set_access_flags(vma, address, ptep, entry,
 						flags & FAULT_FLAG_WRITE))
 		update_mmu_cache(vma, address, ptep);
-
+out_put_page:
+	if (page != pagecache_page)
+		unlock_page(page);
+	put_page(page);
 out_page_table_lock:
 	spin_unlock(&mm->page_table_lock);
 
@@ -3049,12 +3065,17 @@ out_page_table_lock:
 		unlock_page(pagecache_page);
 		put_page(pagecache_page);
 	}
-	if (page != pagecache_page)
-		unlock_page(page);
-	put_page(page);
-
 out_mutex:
 	mutex_unlock(&hugetlb_instantiation_mutex);
 
+	/*
+	 * Generally it's safe to hold refcount during waiting page lock. But
+	 * here we just wait to defer the next page fault to avoid busy loop and
+	 * the page is not used after unlocked before returning from the current
+	 * page fault. So we are safe from accessing freed page, even if we wait
+	 * here without taking refcount.
+	 */
+	if (need_wait_lock)
+		wait_on_page_locked(page);
 	return ret;
 }
--
2.3.4
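
The other pattern worth calling out is the pte_same() recheck: the lockless
huge_ptep_get() snapshot is only trusted if it is still unchanged once the
page table lock is held. Below is a minimal userspace sketch of that
snapshot/lock/recheck pattern. Again illustrative only, not kernel code;
table_lock, entry and handle_fault() are hypothetical names, and a real
lockless read would need atomics. Build with gcc -pthread.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long entry;	/* stands in for the hugetlb pte */

/* Returns 0 on success, -1 if a racing update was detected. */
static int handle_fault(void)
{
	/* Lockless first look (simplified: real code would use atomics). */
	unsigned long snapshot = entry;

	/* Cheap validity checks on the snapshot happen here, unlocked. */

	pthread_mutex_lock(&table_lock);
	if (snapshot != entry) {
		/* Entry changed under us, like !pte_same(): bail and retry. */
		pthread_mutex_unlock(&table_lock);
		return -1;
	}
	/* Safe to act on the entry while the lock is held. */
	pthread_mutex_unlock(&table_lock);
	return 0;
}

int main(void)
{
	entry = 42;
	printf("handle_fault -> %d\n", handle_fault());
	return 0;
}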