Message-ID: <20220201092927.242254-1-jhubbard@nvidia.com>
Date:   Tue, 1 Feb 2022 01:29:27 -0800
From:   John Hubbard <jhubbard@...dia.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
CC:     LKML <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
        John Hubbard <jhubbard@...dia.com>,
        Christoph Hellwig <hch@....de>,
        Will McVicker <willmcvicker@...gle.com>,
        Minchan Kim <minchan@...gle.com>,
        Matthew Wilcox <willy@...radead.org>,
        Christian Borntraeger <borntraeger@...ibm.com>,
        "Heiko Carstens" <hca@...ux.ibm.com>,
        Vasily Gorbik <gor@...ux.ibm.com>,
        "Linus Torvalds" <torvalds@...ux-foundation.org>
Subject: [PATCH] Revert "mm/gup: small refactoring: simplify try_grab_page()"

This reverts commit 54d516b1d62ff8f17cee2da06e5e4706a0d00b8a.

That commit refactored try_grab_page() so that the fast gup and slow gup
paths were effectively combined (again). That was again incorrect, for two
reasons:

a) Fast gup and slow gup take reference counts on pages in different ways
and with different goals (sketched below): see Linus' writeup in commit
cd1adf1b63a1 ("Revert "mm/gup: remove try_get_page(), call
try_get_compound_head() directly""), and

b) try_grab_compound_head() also has a specific check for "FOLL_LONGTERM
&& !is_pinned(page)", which assumes that the caller can fall back to slow
gup. This resulted in new failures, as recently reported by Will McVicker
[1].
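To illustrate both points with a paraphrased sketch (not the exact
upstream code; helper names and error handling are approximate): slow gup
takes its reference with the page table lock held, so a plain grab
suffices, while fast gup runs locklessly and must use a speculative grab
that is allowed to fail, and try_grab_compound_head() additionally bails
out on the FOLL_LONGTERM case:

	/* Slow gup: page table lock held, the page cannot go away. */
	if (unlikely(!try_get_page(page)))
		return -ENOMEM;

	/*
	 * Fast gup: lockless, so the speculative grab may legitimately
	 * fail and the caller is expected to retry via the slow path.
	 */
	head = try_grab_compound_head(page, 1, flags);
	if (!head)
		return 0;	/* tell the caller to use slow gup instead */

	/* Inside try_grab_compound_head(), roughly: */
	if (unlikely((flags & FOLL_LONGTERM) && !is_pinned(page)))
		return NULL;	/* only valid if slow gup can take over */

Once slow gup goes through the same helper, that last check can fail in a
path that has no slower path left to fall back to, which is presumably
what the reported failures come down to.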

But the problem described in (a) is real too, even though it may not have
produced failure reports yet. So just revert this.

[1] https://lore.kernel.org/r/20220131203504.3458775-1-willmcvicker@google.com

Fixes: 54d516b1d62f ("mm/gup: small refactoring: simplify try_grab_page()")
Cc: Christoph Hellwig <hch@....de>
Cc: Will McVicker <willmcvicker@...gle.com>
Cc: Minchan Kim <minchan@...gle.com>
Cc: Matthew Wilcox <willy@...radead.org>
Cc: Christian Borntraeger <borntraeger@...ibm.com>
Cc: Heiko Carstens <hca@...ux.ibm.com>
Cc: Vasily Gorbik <gor@...ux.ibm.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: John Hubbard <jhubbard@...dia.com>
---
 mm/gup.c | 35 ++++++++++++++++++++++++++++++-----
 1 file changed, 30 insertions(+), 5 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index f0af462ac1e2..a9d4d724aef7 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -124,8 +124,8 @@ static inline struct page *try_get_compound_head(struct page *page, int refs)
  * considered failure, and furthermore, a likely bug in the caller, so a warning
  * is also emitted.
  */
-struct page *try_grab_compound_head(struct page *page,
-				    int refs, unsigned int flags)
+__maybe_unused struct page *try_grab_compound_head(struct page *page,
+						   int refs, unsigned int flags)
 {
 	if (flags & FOLL_GET)
 		return try_get_compound_head(page, refs);
@@ -208,10 +208,35 @@ static void put_compound_head(struct page *page, int refs, unsigned int flags)
  */
 bool __must_check try_grab_page(struct page *page, unsigned int flags)
 {
-	if (!(flags & (FOLL_GET | FOLL_PIN)))
-		return true;
+	WARN_ON_ONCE((flags & (FOLL_GET | FOLL_PIN)) == (FOLL_GET | FOLL_PIN));
 
-	return try_grab_compound_head(page, 1, flags);
+	if (flags & FOLL_GET)
+		return try_get_page(page);
+	else if (flags & FOLL_PIN) {
+		int refs = 1;
+
+		page = compound_head(page);
+
+		if (WARN_ON_ONCE(page_ref_count(page) <= 0))
+			return false;
+
+		if (hpage_pincount_available(page))
+			hpage_pincount_add(page, 1);
+		else
+			refs = GUP_PIN_COUNTING_BIAS;
+
+		/*
+		 * Similar to try_grab_compound_head(): even if using the
+		 * hpage_pincount_add/_sub() routines, be sure to
+		 * *also* increment the normal page refcount field at least
+		 * once, so that the page really is pinned.
+		 */
+		page_ref_add(page, refs);
+
+		mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_ACQUIRED, 1);
+	}
+
+	return true;
 }
 
 /**

base-commit: 26291c54e111ff6ba87a164d85d4a4e134b7315c
-- 
2.35.1
