Message-Id: <20220824175757.20590-6-mike.kravetz@oracle.com>
Date: Wed, 24 Aug 2022 10:57:54 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: Muchun Song <songmuchun@...edance.com>,
Miaohe Lin <linmiaohe@...wei.com>,
David Hildenbrand <david@...hat.com>,
Michal Hocko <mhocko@...e.com>, Peter Xu <peterx@...hat.com>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
Andrea Arcangeli <aarcange@...hat.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Davidlohr Bueso <dave@...olabs.net>,
Prakash Sangappa <prakash.sangappa@...cle.com>,
James Houghton <jthoughton@...gle.com>,
Mina Almasry <almasrymina@...gle.com>,
Pasha Tatashin <pasha.tatashin@...een.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Ray Fucillo <Ray.Fucillo@...ersystems.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>
Subject: [PATCH 5/8] hugetlb: rename vma_shareable() and refactor code

Rename the routine vma_shareable() to vma_addr_pmd_shareable(), since it
checks whether the PMD mapping a specific address within the vma can be
shared.  Refactor the code so that a PUD_SIZE-aligned range can be checked
for shareability directly, as a subsequent patch will need this.

Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
---
mm/hugetlb.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a6eb46c64baf..758b6844d566 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6648,26 +6648,33 @@ static unsigned long page_table_shareable(struct vm_area_struct *svma,
return saddr;
}

-static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
+static bool __vma_aligned_range_pmd_shareable(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
{
- unsigned long base = addr & PUD_MASK;
- unsigned long end = base + PUD_SIZE;
-
/*
* check on proper vm_flags and page table alignment
*/
- if (vma->vm_flags & VM_MAYSHARE && range_in_vma(vma, base, end))
+ if (vma->vm_flags & VM_MAYSHARE && range_in_vma(vma, start, end))
return true;
return false;
}

+static bool vma_addr_pmd_shareable(struct vm_area_struct *vma,
+ unsigned long addr)
+{
+ unsigned long start = addr & PUD_MASK;
+ unsigned long end = start + PUD_SIZE;
+
+ return __vma_aligned_range_pmd_shareable(vma, start, end);
+}
+
bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
{
#ifdef CONFIG_USERFAULTFD
if (uffd_disable_huge_pmd_share(vma))
return false;
#endif
- return vma_shareable(vma, addr);
+ return vma_addr_pmd_shareable(vma, addr);
}

/*
--
2.37.1
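
As an aside, for anyone following along outside a kernel tree: below is a
minimal standalone userspace sketch (illustration only, not kernel code) of
what the renamed helpers compute.  PUD_SHIFT/PUD_SIZE assume x86-64
defaults (1GB PUD regions), and struct vm_area_struct, range_in_vma() and
VM_MAYSHARE here are stripped-down stand-ins, not the real kernel
definitions.

/*
 * Standalone userspace sketch of the PUD alignment check above.
 * Assumptions: PUD_SHIFT/PUD_SIZE use x86-64 defaults (1GB regions);
 * the struct, flag value and range_in_vma() are minimal mocks.
 */
#include <stdbool.h>
#include <stdio.h>

#define PUD_SHIFT	30
#define PUD_SIZE	(1UL << PUD_SHIFT)
#define PUD_MASK	(~(PUD_SIZE - 1))
#define VM_MAYSHARE	0x00000080UL

struct vm_area_struct {
	unsigned long vm_start;		/* first address of the mapping */
	unsigned long vm_end;		/* first address past the mapping */
	unsigned long vm_flags;
};

/* mock of the kernel helper: is [start, end) fully inside the vma? */
static bool range_in_vma(struct vm_area_struct *vma,
			 unsigned long start, unsigned long end)
{
	return start >= vma->vm_start && end <= vma->vm_end;
}

static bool __vma_aligned_range_pmd_shareable(struct vm_area_struct *vma,
					      unsigned long start,
					      unsigned long end)
{
	/* check on proper vm_flags and page table alignment */
	return (vma->vm_flags & VM_MAYSHARE) &&
	       range_in_vma(vma, start, end);
}

static bool vma_addr_pmd_shareable(struct vm_area_struct *vma,
				   unsigned long addr)
{
	unsigned long start = addr & PUD_MASK;	/* round down to PUD boundary */
	unsigned long end = start + PUD_SIZE;

	return __vma_aligned_range_pmd_shareable(vma, start, end);
}

int main(void)
{
	/* PUD-aligned, PUD-sized shared mapping: sharing is possible */
	struct vm_area_struct aligned = {
		.vm_start = 1UL << 30,
		.vm_end   = 2UL << 30,
		.vm_flags = VM_MAYSHARE,
	};
	/* same size, but shifted 1MB into the PUD region: not shareable */
	struct vm_area_struct unaligned = {
		.vm_start = (1UL << 30) + (1UL << 20),
		.vm_end   = (2UL << 30) + (1UL << 20),
		.vm_flags = VM_MAYSHARE,
	};
	unsigned long addr = (1UL << 30) + (16UL << 20);	/* inside both */

	printf("aligned:   %d\n", vma_addr_pmd_shareable(&aligned, addr));
	printf("unaligned: %d\n", vma_addr_pmd_shareable(&unaligned, addr));
	return 0;
}

Built with a plain "cc sketch.c", the aligned vma prints 1 (the PUD-sized
region containing addr lies entirely within the vma, so its PMD page could
in principle be shared) and the unaligned vma prints 0 (the rounded-down
region starts before vm_start, which rules sharing out).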