Message-Id: <20191108093814.16032-5-vbabka@suse.cz>
Date:   Fri,  8 Nov 2019 10:38:10 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     stable@...r.kernel.org
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Ajay Kaher <akaher@...are.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Matthew Wilcox <willy@...radead.org>,
        Jann Horn <jannh@...gle.com>, stable@...nel.org,
        Vlastimil Babka <vbabka@...e.cz>
Subject: [PATCH STABLE 4.4 4/8] mm: add 'try_get_page()' helper function

From: Linus Torvalds <torvalds@...ux-foundation.org>

commit 88b1a17dfc3ed7728316478fae0f5ad508f50397 upstream.

[ 4.4 backport: get_page() is more complicated due to special handling
  of tail pages via __get_page_tail(). But in all cases, eventually the
  compound head page's refcount is incremented. So try_get_page() just
  checks compound head's refcount for overflow and then simply calls
  get_page().								 ]
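
[ For context, not part of the patch: the 4.4 get_page() being wrapped
  looks roughly like this (paraphrased from memory of 4.4's
  include/linux/mm.h); tail pages take the __get_page_tail() path, which
  still ends up elevating the head page's _count:

	static inline void get_page(struct page *page)
	{
		if (unlikely(PageTail(page)))
			if (likely(__get_page_tail(page)))
				return;
		/*
		 * Getting a normal page or the head of a compound page
		 * requires an already elevated page->_count.
		 */
		VM_BUG_ON_PAGE(atomic_read(&page->_count) <= 0, page);
		atomic_inc(&page->_count);
	}
  ]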

This is the same as the traditional 'get_page()' function, but instead
of unconditionally incrementing the reference count of the page, it only
does so if the count was "safe".  It returns whether the reference count
was incremented (and is marked __must_check, since the caller obviously
has to be aware of it).

Also like 'get_page()', you can't use this function unless you already
had a reference to the page.  The intent is that you can use this
exactly like get_page(), but in situations where you want to limit the
maximum reference count.
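
For illustration only (hypothetical caller sketch, not part of this
patch): a get_user_pages()-style path that used to call get_page()
unconditionally would instead check the result and back off. The ptl
lock and the -ENOMEM choice here are assumptions for the example:

	/*
	 * The page table lock already pins the page, satisfying
	 * try_get_page()'s precondition of an existing reference.
	 */
	if (unlikely(!try_get_page(page))) {
		spin_unlock(ptl);	/* back off instead of overflowing */
		return -ENOMEM;		/* hypothetical error choice */
	}
	/* ... continue exactly as if get_page(page) had been called ... */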

The code currently does an unconditional WARN_ON_ONCE() if we ever hit
the reference count issues (either zero or negative), as a notification
that the conditional non-increment actually happened.

NOTE! The count access for the "safety" check is inherently racy, but
that doesn't matter since the buffer we use is basically half the range
of the reference count (ie we look at the sign of the count).
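
Back-of-the-envelope (illustration, not from the patch): the count is a
32-bit signed atomic_t, so the sign check starts refusing references
once the count goes negative, i.e. after about 2^31 increments, leaving
roughly another 2^31 increments of headroom before the counter could
wrap all the way back to zero. A racing wrap would thus need ~2^31
increments to sneak in between the racy read and the atomic_inc(), which
is not a realistic window. A minimal userspace C11 model of the
sign-based check (names here are invented for the example):

	#include <limits.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static atomic_int count = 1;	/* caller already holds a reference */

	static bool try_get(void)
	{
		/* Racy sign check, like page_ref_count(head) <= 0 */
		if (atomic_load(&count) <= 0)
			return false;
		atomic_fetch_add(&count, 1);
		return true;
	}

	int main(void)
	{
		/* Drive the counter past INT_MAX; C11 atomics wrap */
		atomic_store(&count, INT_MAX);
		atomic_fetch_add(&count, 1);	/* now INT_MIN, negative */
		printf("try_get after overflow: %d\n", try_get()); /* 0 */
		return 0;
	}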

Acked-by: Matthew Wilcox <willy@...radead.org>
Cc: Jann Horn <jannh@...gle.com>
Cc: stable@...nel.org
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
---
 include/linux/mm.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 997edfcb0a30..78358aeb7732 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -510,6 +510,21 @@ static inline void get_page(struct page *page)
 	atomic_inc(&page->_count);
 }
 
+static inline __must_check bool try_get_page(struct page *page)
+{
+	struct page *head = compound_head(page);
+
+	/*
+	 * get_page() always increases the head page's refcount, either directly
+	 * or via __get_page_tail() for a tail page, so we check the head here
+	 */
+	if (WARN_ON_ONCE(page_ref_count(head) <= 0))
+		return false;
+
+	get_page(page);
+	return true;
+}
+
 static inline struct page *virt_to_head_page(const void *x)
 {
 	struct page *page = virt_to_page(x);
-- 
2.23.0
