Message-ID: <20240618-exclusive-gup-v1-1-30472a19c5d1@quicinc.com>
Date: Tue, 18 Jun 2024 17:05:07 -0700
From: Elliot Berman <quic_eberman@...cinc.com>
To: Andrew Morton <akpm@...ux-foundation.org>, Shuah Khan <shuah@...nel.org>,
David Hildenbrand <david@...hat.com>,
Matthew Wilcox <willy@...radead.org>, <maz@...nel.org>
CC: <kvm@...r.kernel.org>, <linux-arm-msm@...r.kernel.org>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<linux-kselftest@...r.kernel.org>, <pbonzini@...hat.com>,
Elliot Berman <quic_eberman@...cinc.com>,
Fuad Tabba <tabba@...gle.com>
Subject: [PATCH RFC 1/5] mm/gup: Move GUP_PIN_COUNTING_BIAS to page_ref.h
From: Fuad Tabba <tabba@...gle.com>
Move the GUP_PIN_COUNTING_BIAS definition, together with the comment
documenting it, from include/linux/mm.h to include/linux/page_ref.h,
alongside the other page refcount helpers.

No functional change intended.
Signed-off-by: Fuad Tabba <tabba@...gle.com>
Signed-off-by: Elliot Berman <quic_eberman@...cinc.com>
---
include/linux/mm.h | 32 --------------------------------
include/linux/page_ref.h | 32 ++++++++++++++++++++++++++++++++
2 files changed, 32 insertions(+), 32 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9849dfda44d43..fd0d10b08e7ac 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1580,38 +1580,6 @@ static inline void put_page(struct page *page)
folio_put(folio);
}
-/*
- * GUP_PIN_COUNTING_BIAS, and the associated functions that use it, overload
- * the page's refcount so that two separate items are tracked: the original page
- * reference count, and also a new count of how many pin_user_pages() calls were
- * made against the page. ("gup-pinned" is another term for the latter).
- *
- * With this scheme, pin_user_pages() becomes special: such pages are marked as
- * distinct from normal pages. As such, the unpin_user_page() call (and its
- * variants) must be used in order to release gup-pinned pages.
- *
- * Choice of value:
- *
- * By making GUP_PIN_COUNTING_BIAS a power of two, debugging of page reference
- * counts with respect to pin_user_pages() and unpin_user_page() becomes
- * simpler, due to the fact that adding an even power of two to the page
- * refcount has the effect of using only the upper N bits, for the code that
- * counts up using the bias value. This means that the lower bits are left for
- * the exclusive use of the original code that increments and decrements by one
- * (or at least, by much smaller values than the bias value).
- *
- * Of course, once the lower bits overflow into the upper bits (and this is
- * OK, because subtraction recovers the original values), then visual inspection
- * no longer suffices to directly view the separate counts. However, for normal
- * applications that don't have huge page reference counts, this won't be an
- * issue.
- *
- * Locking: the lockless algorithm described in folio_try_get_rcu()
- * provides safe operation for get_user_pages(), page_mkclean() and
- * other calls that race to set up page table entries.
- */
-#define GUP_PIN_COUNTING_BIAS (1U << 10)
-
void unpin_user_page(struct page *page);
void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
bool make_dirty);
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 1acf5bac7f503..e6aeaafb143ca 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -62,6 +62,38 @@ static inline void __page_ref_unfreeze(struct page *page, int v)
#endif
+/*
+ * GUP_PIN_COUNTING_BIAS, and the associated functions that use it, overload
+ * the page's refcount so that two separate items are tracked: the original page
+ * reference count, and also a new count of how many pin_user_pages() calls were
+ * made against the page. ("gup-pinned" is another term for the latter).
+ *
+ * With this scheme, pin_user_pages() becomes special: such pages are marked as
+ * distinct from normal pages. As such, the unpin_user_page() call (and its
+ * variants) must be used in order to release gup-pinned pages.
+ *
+ * Choice of value:
+ *
+ * By making GUP_PIN_COUNTING_BIAS a power of two, debugging of page reference
+ * counts with respect to pin_user_pages() and unpin_user_page() becomes
+ * simpler, due to the fact that adding an even power of two to the page
+ * refcount has the effect of using only the upper N bits, for the code that
+ * counts up using the bias value. This means that the lower bits are left for
+ * the exclusive use of the original code that increments and decrements by one
+ * (or at least, by much smaller values than the bias value).
+ *
+ * Of course, once the lower bits overflow into the upper bits (and this is
+ * OK, because subtraction recovers the original values), then visual inspection
+ * no longer suffices to directly view the separate counts. However, for normal
+ * applications that don't have huge page reference counts, this won't be an
+ * issue.
+ *
+ * Locking: the lockless algorithm described in folio_try_get_rcu()
+ * provides safe operation for get_user_pages(), page_mkclean() and
+ * other calls that race to set up page table entries.
+ */
+#define GUP_PIN_COUNTING_BIAS (1U << 10)
+
static inline int page_ref_count(const struct page *page)
{
return atomic_read(&page->_refcount);
--
2.34.1
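
[Not part of the patch: the moved comment describes how GUP_PIN_COUNTING_BIAS
splits one atomic refcount into two logical counters, with ordinary references
living in the low bits and pins counted in units of the bias in the high bits.
A minimal userspace sketch of that arithmetic follows; toy_page, toy_get(),
toy_pin() and toy_unpin() are hypothetical stand-ins for struct page,
get_page(), pin_user_pages() and unpin_user_page(), not kernel code.]

/*
 * Toy model of the GUP_PIN_COUNTING_BIAS scheme described in the comment
 * above.  Plain C11 atomics stand in for page->_refcount.
 */
#include <stdatomic.h>
#include <stdio.h>

#define GUP_PIN_COUNTING_BIAS (1U << 10)

struct toy_page {
	atomic_int refcount;		/* models page->_refcount */
};

/* An ordinary reference bumps the count by 1. */
static void toy_get(struct toy_page *p)
{
	atomic_fetch_add(&p->refcount, 1);
}

/* A pin bumps the count by the bias, i.e. by 1024. */
static void toy_pin(struct toy_page *p)
{
	atomic_fetch_add(&p->refcount, GUP_PIN_COUNTING_BIAS);
}

/* Unpinning subtracts the same bias, recovering the original count. */
static void toy_unpin(struct toy_page *p)
{
	atomic_fetch_sub(&p->refcount, GUP_PIN_COUNTING_BIAS);
}

/*
 * As long as the ordinary references in the low bits stay well below the
 * bias, count / BIAS approximates the number of pins and count % BIAS the
 * number of ordinary references.
 */
int main(void)
{
	struct toy_page page = { .refcount = 1 };	/* one base reference */

	toy_get(&page);					/* second ordinary reference */
	toy_pin(&page);					/* first pin */
	toy_pin(&page);					/* second pin */

	int count = atomic_load(&page.refcount);
	printf("refcount = %d (pins ~= %u, refs ~= %u)\n",
	       count, count / GUP_PIN_COUNTING_BIAS,
	       count % GUP_PIN_COUNTING_BIAS);
	/* prints: refcount = 2050 (pins ~= 2, refs ~= 2) */

	toy_unpin(&page);
	toy_unpin(&page);
	return 0;
}

[This is also roughly how the kernel's "maybe pinned" heuristics interpret the
count: a refcount at or above the bias is treated as possibly DMA-pinned, which
is why the comment stresses that the split is approximate once the low bits
overflow.]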