Date:   Sun, 16 Jan 2022 15:21:51 +0000
From:   cgel.zte@...il.com
To:     akpm@...ux-foundation.org, hannes@...xchg.org, sfr@...b.auug.org.au
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Yang Yang <yang.yang29@....com.cn>
Subject: [PATCH] psi: Treat ksm swapping in copy as memstall

From: Yang Yang <yang.yang29@....com.cn>

When a page which used to be a KSM page is faulted in from swap, and
that page had been swapped in before, the system has to make a copy of
it. Such a copy obviously only happens under high memory pressure, so
treat it as a memstall, even though KSM page merging itself is not
caused by high memory pressure.

Accounting this new kind of stall will help psi report memory pressure
more precisely.
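
For context, psi_memstall_enter()/psi_memstall_leave() are the existing
psi annotations used elsewhere in mm to account the time a task spends
stalled on memory. A minimal sketch of the pattern this patch applies
(the function name below is made up purely for illustration):

  #include <linux/psi.h>

  /*
   * Illustrative only: bracket work that is only needed because memory
   * is under pressure, so psi accounts the elapsed time as a memstall.
   */
  static void example_copy_under_pressure(void)
  {
  	unsigned long pflags;

  	psi_memstall_enter(&pflags);
  	/* ... allocate and copy, as ksm_might_need_to_copy() does ... */
  	psi_memstall_leave(&pflags);
  }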

Signed-off-by: Yang Yang <yang.yang29@....com.cn>
---
 mm/ksm.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/ksm.c b/mm/ksm.c
index 4a7f8614e57d..d4ec6773f9b8 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -39,6 +39,7 @@
 #include <linux/freezer.h>
 #include <linux/oom.h>
 #include <linux/numa.h>
+#include <linux/psi.h>
 
 #include <asm/tlbflush.h>
 #include "internal.h"
@@ -2569,6 +2570,7 @@ struct page *ksm_might_need_to_copy(struct page *page,
 {
 	struct anon_vma *anon_vma = page_anon_vma(page);
 	struct page *new_page;
+	unsigned long pflags;
 
 	if (PageKsm(page)) {
 		if (page_stable_node(page) &&
@@ -2583,6 +2585,7 @@ struct page *ksm_might_need_to_copy(struct page *page,
 	if (!PageUptodate(page))
 		return page;		/* let do_swap_page report the error */
 
+	psi_memstall_enter(&pflags);
 	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
 	if (new_page &&
 	    mem_cgroup_charge(page_folio(new_page), vma->vm_mm, GFP_KERNEL)) {
@@ -2600,6 +2603,7 @@ struct page *ksm_might_need_to_copy(struct page *page,
 #endif
 	}
 
+	psi_memstall_leave(&pflags);
 	return new_page;
 }
 
-- 
2.25.1
