Message-ID: <20240710214350.147864-1-david@redhat.com>
Date: Wed, 10 Jul 2024 23:43:50 +0200
From: David Hildenbrand <david@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org,
	David Hildenbrand <david@...hat.com>,
	Zi Yan <ziy@...dia.com>,
	Yosry Ahmed <yosryahmed@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: [PATCH v2] mm/rmap: cleanup partially-mapped handling in __folio_remove_rmap()

Let's simplify and reduce the code indentation. In the RMAP_LEVEL_PTE case,
we already check for a non-zero nr when computing partially_mapped.
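
For reference, the PTE-level computation already reads roughly like
this (abridged excerpt; "mapped" is the folio's mapped-page counter):

	partially_mapped = nr && atomic_read(mapped);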

For RMAP_LEVEL_PMD, it's a bit more subtle. We likely don't need the
"nr" check, but "nr < nr_pmdmapped" could also hold with nr == 0 if we
stumbled into the "/* Raced ahead of another remove and an add? */"
case. So let's simply move the nr check into the partially_mapped
computation.
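
To make the difference concrete, here is a minimal standalone sketch
(a userspace toy with made-up values, not kernel code) of the raced
case, where nr was clamped to 0 but nr_pmdmapped is still set:

	#include <stdbool.h>
	#include <stdio.h>

	int main(void)
	{
		/* Hypothetical values after the raced case: nr got
		 * clamped to 0, nr_pmdmapped is still the folio size. */
		int nr = 0, nr_pmdmapped = 512;

		bool old_check = nr < nr_pmdmapped;		/* true  */
		bool new_check = nr && nr < nr_pmdmapped;	/* false */

		/* Previously, the outer "if (nr)" kept us from queueing
		 * a deferred split in this case; with that guard gone,
		 * the nr check must live in the computation itself. */
		printf("old: %d, new: %d\n", old_check, new_check);
		return 0;
	}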

Note that partially_mapped is always false for small folios.

No functional change intended.

Reviewed-by: Zi Yan <ziy@...dia.com>
Reviewed-by: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: David Hildenbrand <david@...hat.com>
---

v1 -> v2:
* Move the comment as well; add RBs.
* CC Andrew ;)

---
 mm/rmap.c | 23 ++++++++++-------------
 1 file changed, 10 insertions(+), 13 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 8616308610b9..bbf002667e61 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1549,22 +1549,19 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 			}
 		}
 
-		partially_mapped = nr < nr_pmdmapped;
+		partially_mapped = nr && nr < nr_pmdmapped;
 		break;
 	}
 
-	if (nr) {
-		/*
-		 * Queue anon large folio for deferred split if at least one
-		 * page of the folio is unmapped and at least one page
-		 * is still mapped.
-		 *
-		 * Check partially_mapped first to ensure it is a large folio.
-		 */
-		if (folio_test_anon(folio) && partially_mapped &&
-		    list_empty(&folio->_deferred_list))
-			deferred_split_folio(folio);
-	}
+	/*
+	 * Queue anon large folio for deferred split if at least one page of
+	 * the folio is unmapped and at least one page is still mapped.
+	 *
+	 * Check partially_mapped first to ensure it is a large folio.
+	 */
+	if (partially_mapped && folio_test_anon(folio) &&
+	    list_empty(&folio->_deferred_list))
+		deferred_split_folio(folio);
 	__folio_mod_stat(folio, -nr, -nr_pmdmapped);
 
 	/*
-- 
2.45.2

