Date: Tue,  4 Jun 2024 12:24:44 +0800
From: alexs@...nel.org
To: Andrew Morton <akpm@...ux-foundation.org>,
	linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	izik.eidus@...ellosystems.com,
	willy@...radead.org,
	aarcange@...hat.com,
	chrisw@...s-sol.org,
	hughd@...gle.com,
	david@...hat.com
Cc: "Alex Shi (tencent)" <alexs@...nel.org>
Subject: [PATCH 02/10] mm/ksm: skip subpages of compound pages

From: "Alex Shi (tencent)" <alexs@...nel.org>

When a folio isn't fit for KSM, its subpages are unlikely to be suitable
either, so skip checking the remaining subpages of the compound page to
save some work.

Signed-off-by: Alex Shi (tencent) <alexs@...nel.org>
---
 mm/ksm.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
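
Not part of the commit: a minimal userspace sketch, for illustration only,
of the scan-advance idea. compound_nr_of() here is a hypothetical stand-in
for the kernel's PageTail()/compound_nr() checks, and scan_address stands in
for ksm_scan.address; it only shows why advancing by nr * PAGE_SIZE skips
the remaining subpages in one step instead of visiting them one by one.

#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Hypothetical helper: number of subpages in the (compound) page backing
 * the given scan address; 1 for a normal page. Here we simply pretend the
 * first page is a 16-page compound page and the rest are normal pages. */
static unsigned long compound_nr_of(unsigned long addr)
{
	return (addr / PAGE_SIZE == 0) ? 16 : 1;
}

int main(void)
{
	unsigned long scan_address = 0;
	unsigned long end = 64 * PAGE_SIZE;
	unsigned int checks = 0;

	while (scan_address < end) {
		unsigned long nr = compound_nr_of(scan_address);

		checks++;	/* one suitability check per stop */
		/* advance past all subpages at once, analogous to
		 * ksm_scan.address += nr * PAGE_SIZE in the patch */
		scan_address += nr * PAGE_SIZE;
	}
	printf("stopped at %u addresses instead of %lu\n",
	       checks, end / PAGE_SIZE);
	return 0;
}

Compile with e.g. "gcc -Wall sketch.c" and run; it stops at far fewer
addresses than a strictly per-page walk would.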

diff --git a/mm/ksm.c b/mm/ksm.c
index 97e5b41f8c4b..e2fdb9dd98e2 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2644,6 +2644,8 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 		goto no_vmas;
 
 	for_each_vma(vmi, vma) {
+		int nr = 1;
+
 		if (!(vma->vm_flags & VM_MERGEABLE))
 			continue;
 		if (ksm_scan.address < vma->vm_start)
@@ -2660,6 +2662,9 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 				cond_resched();
 				continue;
 			}
+
+			VM_WARN_ON(PageTail(*page));
+			nr = compound_nr(*page);
 			if (is_zone_device_page(*page))
 				goto next_page;
 			if (PageAnon(*page)) {
@@ -2672,7 +2677,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 					if (should_skip_rmap_item(*page, rmap_item))
 						goto next_page;
 
-					ksm_scan.address += PAGE_SIZE;
+					ksm_scan.address += nr * PAGE_SIZE;
 				} else
 					put_page(*page);
 				mmap_read_unlock(mm);
@@ -2680,7 +2685,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 			}
 next_page:
 			put_page(*page);
-			ksm_scan.address += PAGE_SIZE;
+			ksm_scan.address += nr * PAGE_SIZE;
 			cond_resched();
 		}
 	}
-- 
2.43.0

