Message-ID: <20240618232648.4090299-2-ryan.roberts@arm.com>
Date: Wed, 19 Jun 2024 00:26:41 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Andrew Morton <akpm@...ux-foundation.org>,
	Chris Li <chrisl@...nel.org>,
	Kairui Song <kasong@...cent.com>,
	"Huang, Ying" <ying.huang@...el.com>,
	Kalesh Singh <kaleshsingh@...gle.com>,
	Barry Song <baohua@...nel.org>,
	Hugh Dickins <hughd@...gle.com>,
	David Hildenbrand <david@...hat.com>
Cc: Ryan Roberts <ryan.roberts@....com>,
	linux-kernel@...r.kernel.org,
	linux-mm@...ck.org
Subject: [RFC PATCH v1 1/5] mm: swap: Simplify end-of-cluster calculation

It's possible that a swap file will have a partial cluster at the end, if
the swap size is not a multiple of the cluster size. But this partial
cluster will never be marked free and so scan_swap_map_try_ssd_cluster()
will never see it. Therefore it can always consider that a cluster ends
at the next cluster boundary.

This leads to a simplification of the endpoint calculation and removal
of an unnecessary conditional.

This change has the useful side effect of making lock_cluster()
unconditional, which a later commit will rely on.
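
As a plain illustration of why the clamp was redundant (not part of the
patch): the user-space sketch below assumes SWAPFILE_CLUSTER is 256 and a
hypothetical 1000-slot swap area. For any offset inside a full cluster,
the old min_t()-clamped expression and the bare ALIGN() used after this
patch produce the same end boundary; the trailing partial cluster
[768, 1000) is never handed out, so the clamp never fires.

#include <stdio.h>

#define SWAPFILE_CLUSTER 256UL
#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	/* Hypothetical swap area: 3 full clusters plus a 232-slot tail. */
	unsigned long si_max = 1000;
	/* An offset inside full cluster 2, i.e. [512, 768). */
	unsigned long tmp = 520;

	/* Old calculation: min_t(unsigned long, si->max, ALIGN(...)). */
	unsigned long old_max = si_max < ALIGN(tmp + 1, SWAPFILE_CLUSTER) ?
				si_max : ALIGN(tmp + 1, SWAPFILE_CLUSTER);
	/* New calculation: the next cluster boundary, unclamped. */
	unsigned long new_max = ALIGN(tmp + 1, SWAPFILE_CLUSTER);

	/* Both print 768 for any tmp that lives in a full cluster. */
	printf("old max = %lu, new max = %lu\n", old_max, new_max);
	return 0;
}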

Signed-off-by: Ryan Roberts <ryan.roberts@....com>
---
 mm/swapfile.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index b3e5e384e330..30e79739dfdc 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -677,16 +677,14 @@ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
 	 * check if there is still free entry in the cluster, maintaining
 	 * natural alignment.
 	 */
-	max = min_t(unsigned long, si->max, ALIGN(tmp + 1, SWAPFILE_CLUSTER));
-	if (tmp < max) {
-		ci = lock_cluster(si, tmp);
-		while (tmp < max) {
-			if (swap_range_empty(si->swap_map, tmp, nr_pages))
-				break;
-			tmp += nr_pages;
-		}
-		unlock_cluster(ci);
+	max = ALIGN(tmp + 1, SWAPFILE_CLUSTER);
+	ci = lock_cluster(si, tmp);
+	while (tmp < max) {
+		if (swap_range_empty(si->swap_map, tmp, nr_pages))
+			break;
+		tmp += nr_pages;
 	}
+	unlock_cluster(ci);
 	if (tmp >= max) {
 		cluster->next[order] = SWAP_NEXT_INVALID;
 		goto new_cluster;
--
2.43.0

