Message-Id: <20220204195852.1751729-15-willy@infradead.org>
Date: Fri, 4 Feb 2022 19:57:51 +0000
From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
To: linux-mm@...ck.org
Cc: "Matthew Wilcox (Oracle)" <willy@...radead.org>,
	linux-kernel@...r.kernel.org, Christoph Hellwig <hch@....de>,
	John Hubbard <jhubbard@...dia.com>,
	Jason Gunthorpe <jgg@...dia.com>,
	William Kucharski <william.kucharski@...cle.com>
Subject: [PATCH 14/75] mm: Turn page_maybe_dma_pinned() into folio_maybe_dma_pinned()

Replace three calls to compound_head() with one. This removes the last
user of compound_pincount(), so remove that helper too.
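
For illustration (a sketch only; my_needs_copy() is a made-up caller,
not part of this series), a user of the new interface resolves the
head page once, in page_folio(), rather than once per helper inside
the old predicate:

	static bool my_needs_copy(struct page *page)
	{
		/* The only compound_head() lookup on this path. */
		struct folio *folio = page_folio(page);

		return folio_maybe_dma_pinned(folio);
	}
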
Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
Reviewed-by: Christoph Hellwig <hch@....de>
Reviewed-by: John Hubbard <jhubbard@...dia.com>
Reviewed-by: Jason Gunthorpe <jgg@...dia.com>
Reviewed-by: William Kucharski <william.kucharski@...cle.com>
---
 include/linux/mm.h | 49 ++++++++++++++++++++++------------------------
 1 file changed, 23 insertions(+), 26 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d5f0f2cfd552..a29dacec7294 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -901,13 +901,6 @@ static inline int head_compound_pincount(struct page *head)
 	return atomic_read(compound_pincount_ptr(head));
 }
 
-static inline int compound_pincount(struct page *page)
-{
-	VM_BUG_ON_PAGE(!PageCompound(page), page);
-	page = compound_head(page);
-	return head_compound_pincount(page);
-}
-
 static inline void set_compound_order(struct page *page, unsigned int order)
 {
 	page[1].compound_order = order;
@@ -1280,48 +1273,52 @@ void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
 void unpin_user_pages(struct page **pages, unsigned long npages);
 
 /**
- * page_maybe_dma_pinned - Report if a page is pinned for DMA.
- * @page: The page.
+ * folio_maybe_dma_pinned - Report if a folio may be pinned for DMA.
+ * @folio: The folio.
  *
- * This function checks if a page has been pinned via a call to
+ * This function checks if a folio has been pinned via a call to
  * a function in the pin_user_pages() family.
  *
- * For non-huge pages, the return value is partially fuzzy: false is not fuzzy,
+ * For small folios, the return value is partially fuzzy: false is not fuzzy,
  * because it means "definitely not pinned for DMA", but true means "probably
  * pinned for DMA, but possibly a false positive due to having at least
- * GUP_PIN_COUNTING_BIAS worth of normal page references".
+ * GUP_PIN_COUNTING_BIAS worth of normal folio references".
  *
- * False positives are OK, because: a) it's unlikely for a page to get that many
- * refcounts, and b) all the callers of this routine are expected to be able to
- * deal gracefully with a false positive.
+ * False positives are OK, because: a) it's unlikely for a folio to
+ * get that many refcounts, and b) all the callers of this routine are
+ * expected to be able to deal gracefully with a false positive.
  *
- * For huge pages, the result will be exactly correct. That's because we have
- * more tracking data available: the 3rd struct page in the compound page is
- * used to track the pincount (instead using of the GUP_PIN_COUNTING_BIAS
- * scheme).
+ * For large folios, the result will be exactly correct. That's because
+ * we have more tracking data available: the compound_pincount is used
+ * instead of the GUP_PIN_COUNTING_BIAS scheme.
  *
  * For more information, please see Documentation/core-api/pin_user_pages.rst.
  *
  * Return: True, if it is likely that the page has been "dma-pinned".
  * False, if the page is definitely not dma-pinned.
  */
-static inline bool page_maybe_dma_pinned(struct page *page)
+static inline bool folio_maybe_dma_pinned(struct folio *folio)
 {
-	if (PageCompound(page))
-		return compound_pincount(page) > 0;
+	if (folio_test_large(folio))
+		return atomic_read(folio_pincount_ptr(folio)) > 0;
 
 	/*
-	 * page_ref_count() is signed. If that refcount overflows, then
-	 * page_ref_count() returns a negative value, and callers will avoid
+	 * folio_ref_count() is signed. If that refcount overflows, then
+	 * folio_ref_count() returns a negative value, and callers will avoid
	 * further incrementing the refcount.
 	 *
-	 * Here, for that overflow case, use the signed bit to count a little
+	 * Here, for that overflow case, use the sign bit to count a little
 	 * bit higher via unsigned math, and thus still get an accurate result.
 	 */
-	return ((unsigned int)page_ref_count(compound_head(page))) >=
+	return ((unsigned int)folio_ref_count(folio)) >=
 		GUP_PIN_COUNTING_BIAS;
 }
 
+static inline bool page_maybe_dma_pinned(struct page *page)
+{
+	return folio_maybe_dma_pinned(page_folio(page));
+}
+
 static inline bool is_cow_mapping(vm_flags_t flags)
 {
 	return (flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE;
--
2.34.1
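
To make the "partially fuzzy" small-folio case concrete: a
pin_user_pages() pin of a small folio adds GUP_PIN_COUNTING_BIAS
(1U << 10 in the kernel) to the refcount, so the predicate is plain
integer arithmetic. A userspace sketch, with maybe_pinned() standing
in for the small-folio branch (not kernel code):

	#include <stdio.h>

	#define GUP_PIN_COUNTING_BIAS (1U << 10)

	/* Stand-in for the small-folio branch of folio_maybe_dma_pinned(). */
	static int maybe_pinned(int refcount)
	{
		return (unsigned int)refcount >= GUP_PIN_COUNTING_BIAS;
	}

	int main(void)
	{
		printf("%d\n", maybe_pinned(3));	/* 0: definitely not pinned */
		printf("%d\n", maybe_pinned(1 + GUP_PIN_COUNTING_BIAS)); /* 1 */
		printf("%d\n", maybe_pinned(1024));	/* 1: false positive from
							   1024 plain references */
		return 0;
	}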
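
The sign-bit comment can be exercised the same way: once the signed
refcount wraps negative, the cast to unsigned lands far above the
bias, so an overflowed count still reads as "maybe pinned" (again a
userspace sketch, not kernel code):

	#include <limits.h>
	#include <stdio.h>

	#define GUP_PIN_COUNTING_BIAS (1U << 10)

	int main(void)
	{
		int refcount = INT_MIN + 100;		/* wrapped past INT_MAX */
		unsigned int u = (unsigned int)refcount;

		/* Prints "2147483748 1": still >= the bias. */
		printf("%u %d\n", u, u >= GUP_PIN_COUNTING_BIAS);
		return 0;
	}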