Message-Id: <20200515131656.12890-6-willy@infradead.org>
Date: Fri, 15 May 2020 06:16:25 -0700
From: Matthew Wilcox <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 05/36] mm: Introduce thp_order
From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
Like compound_order(), except it evaluates to 0 when
CONFIG_TRANSPARENT_HUGEPAGE is disabled.
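
As a usage sketch (not part of this patch; example_thp_bytes() is a
hypothetical caller introduced here only for illustration), the helper
lets common code compute a THP's size without an #ifdef, because the
!CONFIG_TRANSPARENT_HUGEPAGE stub is the constant 0U:

	/*
	 * Hypothetical caller, for illustration only: with THP enabled
	 * this is equivalent to page_size(page) for a head page; with
	 * THP disabled thp_order() is 0U, so the expression folds to
	 * PAGE_SIZE at compile time.
	 */
	static inline unsigned long example_thp_bytes(struct page *page)
	{
		return PAGE_SIZE << thp_order(page);
	}
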
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/huge_mm.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index e944f9757349..1f6245091917 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -276,6 +276,11 @@ static inline unsigned long thp_size(struct page *page)
 	return page_size(page);
 }
 
+static inline unsigned int thp_order(struct page *page)
+{
+	return compound_order(page);
+}
+
 struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 		pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
 struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
@@ -335,6 +340,7 @@ static inline int hpage_nr_pages(struct page *page)
 }
 
 #define thp_size(x)		PAGE_SIZE
+#define thp_order(x)		0U
 
 static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
 {
--
2.26.2