Message-Id: <20170619170145.25577-7-punit.agrawal@arm.com>
Date: Mon, 19 Jun 2017 18:01:43 +0100
From: Punit Agrawal <punit.agrawal@....com>
To: akpm@...ux-foundation.org
Cc: Punit Agrawal <punit.agrawal@....com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
catalin.marinas@....com, will.deacon@....com,
n-horiguchi@...jp.nec.com, kirill.shutemov@...ux.intel.com,
mike.kravetz@...cle.com, steve.capper@....com,
mark.rutland@....com, linux-arch@...r.kernel.org,
aneesh.kumar@...ux.vnet.ibm.com,
Heiko Carstens <heiko.carstens@...ibm.com>
Subject: [PATCH v5 6/8] mm/hugetlb: Allow architectures to override huge_pte_clear()

When unmapping a hugepage range, huge_pte_clear() is used to clear the
page table entries that are marked as not present. huge_pte_clear()
internally just ends up calling pte_clear(), which does not correctly
deal with hugepages consisting of contiguous page table entries.

Add a size argument to address this issue and allow architectures to
override huge_pte_clear() by wrapping it in a #ifndef block.

Update the s390 implementation with the size parameter as well.

Note that the change only affects huge_pte_clear() - the other generic
hugetlb functions don't need any change.
Signed-off-by: Punit Agrawal <punit.agrawal@....com>
Acked-by: Arnd Bergmann <arnd@...db.de>
Acked-by: Martin Schwidefsky <schwidefsky@...ibm.com>
Cc: Heiko Carstens <heiko.carstens@...ibm.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
Cc: Mike Kravetz <mike.kravetz@...cle.com>
---
 arch/s390/include/asm/hugetlb.h | 2 +-
 include/asm-generic/hugetlb.h   | 4 +++-
 mm/hugetlb.c                    | 2 +-
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
index cd546a245c68..c0443500baec 100644
--- a/arch/s390/include/asm/hugetlb.h
+++ b/arch/s390/include/asm/hugetlb.h
@@ -39,7 +39,7 @@ static inline int prepare_hugepage_range(struct file *file,
 #define arch_clear_hugepage_flags(page)	do { } while (0)
 
 static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
-				  pte_t *ptep)
+				  pte_t *ptep, unsigned long sz)
 {
 	if ((pte_val(*ptep) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3)
 		pte_val(*ptep) = _REGION3_ENTRY_EMPTY;
diff --git a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h
index 99b490b4d05a..540354f94f83 100644
--- a/include/asm-generic/hugetlb.h
+++ b/include/asm-generic/hugetlb.h
@@ -31,10 +31,12 @@ static inline pte_t huge_pte_modify(pte_t pte, pgprot_t newprot)
 	return pte_modify(pte, newprot);
 }
 
+#ifndef huge_pte_clear
 static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep)
+		pte_t *ptep, unsigned long sz)
 {
 	pte_clear(mm, addr, ptep);
 }
+#endif
 
 #endif /* _ASM_GENERIC_HUGETLB_H */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d9f9e4b7381c..b20620ff3751 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3338,7 +3338,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 * unmapped and its refcount is dropped, so just clear pte here.
 		 */
 		if (unlikely(!pte_present(pte))) {
-			huge_pte_clear(mm, address, ptep);
+			huge_pte_clear(mm, address, ptep, sz);
 			spin_unlock(ptl);
 			continue;
 		}
--
2.11.0