Message-Id: <20170619170145.25577-4-punit.agrawal@arm.com>
Date: Mon, 19 Jun 2017 18:01:40 +0100
From: Punit Agrawal <punit.agrawal@....com>
To: akpm@...ux-foundation.org
Cc: Will Deacon <will.deacon@....com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
catalin.marinas@....com, n-horiguchi@...jp.nec.com,
kirill.shutemov@...ux.intel.com, mike.kravetz@...cle.com,
steve.capper@....com, mark.rutland@....com,
linux-arch@...r.kernel.org, aneesh.kumar@...ux.vnet.ibm.com,
Punit Agrawal <punit.agrawal@....com>
Subject: [PATCH v5 3/8] mm, gup: Remove broken VM_BUG_ON_PAGE compound check for hugepages
From: Will Deacon <will.deacon@....com>

When operating on hugepages with DEBUG_VM enabled, the GUP code checks the
compound head for each tail page prior to calling page_cache_add_speculative.
This is broken, because on the fast-GUP path (where we don't hold any page
table locks) we can be racing with a concurrent invocation of
split_huge_page_to_list.

split_huge_page_to_list deals with this race by using page_ref_freeze to
freeze the page and force concurrent GUPs to fail whilst the component
pages are modified. This modification includes clearing the compound_head
field for the tail pages, so checking this prior to a successful call
to page_cache_add_speculative can lead to false positives: In fact,
page_cache_add_speculative *already* has this check once the page refcount
has been successfully updated, so we can simply remove the broken calls
to VM_BUG_ON_PAGE.
Signed-off-by: Will Deacon <will.deacon@....com>
Acked-by: Steve Capper <steve.capper@....com>
Signed-off-by: Punit Agrawal <punit.agrawal@....com>
Acked-by: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@...ux.vnet.ibm.com>
---
mm/gup.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index b3c7214d710d..e74e0b5a0c7c 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1357,7 +1357,6 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
head = pmd_page(orig);
page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
do {
- VM_BUG_ON_PAGE(compound_head(page) != head, page);
pages[*nr] = page;
(*nr)++;
page++;
@@ -1396,7 +1395,6 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
head = pud_page(orig);
page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
do {
- VM_BUG_ON_PAGE(compound_head(page) != head, page);
pages[*nr] = page;
(*nr)++;
page++;
@@ -1434,7 +1432,6 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
head = pgd_page(orig);
page = head + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT);
do {
- VM_BUG_ON_PAGE(compound_head(page) != head, page);
pages[*nr] = page;
(*nr)++;
page++;
--
2.11.0