Message-Id: <20220624173656.2033256-11-jthoughton@google.com>
Date: Fri, 24 Jun 2022 17:36:40 +0000
From: James Houghton <jthoughton@...gle.com>
To: Mike Kravetz <mike.kravetz@...cle.com>,
Muchun Song <songmuchun@...edance.com>,
Peter Xu <peterx@...hat.com>
Cc: David Hildenbrand <david@...hat.com>,
David Rientjes <rientjes@...gle.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Mina Almasry <almasrymina@...gle.com>,
Jue Wang <juew@...gle.com>,
Manish Mishra <manish.mishra@...anix.com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
James Houghton <jthoughton@...gle.com>
Subject: [RFC PATCH 10/26] hugetlb: add for_each_hgm_shift

This is a helper macro, for_each_hgm_shift(), that loops through all
the usable page sizes for a high-granularity-enabled HugeTLB VMA.
Given the VMA's hstate, it loops, in descending order, through the
page sizes that HugeTLB supports for this architecture; PAGE_SIZE is
always included as the final size.
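
As an illustration only (not part of this patch), a caller in
mm/hugetlb.c could walk the sizes the macro yields to pick the
largest one that covers an aligned range. The function name and the
fit check below are made up for the example:

	static unsigned int example_pick_shift(struct vm_area_struct *vma,
					       unsigned long addr,
					       unsigned long len)
	{
		struct hstate *h = hstate_vma(vma);
		struct hstate *tmp_h;
		unsigned int shift;

		/*
		 * shift takes each supported page shift in turn, ending
		 * with PAGE_SHIFT; return the first size that fits.
		 */
		for_each_hgm_shift(h, tmp_h, shift) {
			unsigned long sz = 1UL << shift;

			if (IS_ALIGNED(addr, sz) && len >= sz)
				return shift;
		}
		/* Fallback for ranges smaller than PAGE_SIZE. */
		return PAGE_SHIFT;
	}
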
Signed-off-by: James Houghton <jthoughton@...gle.com>
---
 mm/hugetlb.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8b10b941458d..557b0afdb503 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6989,6 +6989,16 @@ bool hugetlb_hgm_enabled(struct vm_area_struct *vma)
 	/* All shared VMAs have HGM enabled. */
 	return vma->vm_flags & VM_SHARED;
 }
+static unsigned int __shift_for_hstate(struct hstate *h)
+{
+	if (h >= &hstates[hugetlb_max_hstate])
+		return PAGE_SHIFT;
+	return huge_page_shift(h);
+}
+#define for_each_hgm_shift(hstate, tmp_h, shift) \
+	for ((tmp_h) = hstate; (shift) = __shift_for_hstate(tmp_h), \
+			(tmp_h) <= &hstates[hugetlb_max_hstate]; \
+			(tmp_h)++)
 #endif /* CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING */
 
 /*
--
2.37.0.rc0.161.g10f37bed90-goog