Date:   Thu, 2 Nov 2017 20:35:19 +0800
From:   <zhouxianrong@...wei.com>
To:     <linux-mm@...ck.org>
CC:     <linux-kernel@...r.kernel.org>, <akpm@...ux-foundation.org>,
        <jack@...e.cz>, <kirill.shutemov@...ux.intel.com>,
        <ross.zwisler@...ux.intel.com>, <mhocko@...e.com>,
        <dave.jiang@...el.com>, <aneesh.kumar@...ux.vnet.ibm.com>,
        <minchan@...nel.org>, <mingo@...nel.org>, <jglisse@...hat.com>,
        <willy@...ux.intel.com>, <hughd@...gle.com>,
        <zhouxianrong@...wei.com>, <zhouxiyu@...wei.com>,
        <weidu.du@...wei.com>, <fanghua3@...wei.com>, <hutj@...wei.com>,
        <won.ho.park@...wei.com>
Subject: [PATCH] mm: try to free swap only for reading swap fault

From: zhouxianrong <zhouxianrong@...wei.com>

The purpose of this patch: when a read swap fault happens on a clean
swap-cache page whose swap count is one, try_to_free_swap() may remove
the page from the swap cache and mark it dirty. If that page is later
reclaimed, it must be paged out again solely because of that dirty bit,
even though its contents still match the copy on swap. So allow this
action only for write swap faults.

I sampled, in shrink_page_list(), the number of non-dirty anonymous
pages (which need no pageout) and the total number of anonymous pages.

The results are:

        non-dirty anonymous pages     total anonymous pages
before  26343                         635218
after   36907                         634312

Signed-off-by: zhouxianrong <zhouxianrong@...wei.com>
---
 mm/memory.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index a728bed..5a944fe 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2999,7 +2999,7 @@ int do_swap_page(struct vm_fault *vmf)
 	}
 
 	swap_free(entry);
-	if (mem_cgroup_swap_full(page) ||
+	if (((vmf->flags & FAULT_FLAG_WRITE) && mem_cgroup_swap_full(page)) ||
 	    (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
 		try_to_free_swap(page);
 	unlock_page(page);
-- 
1.7.9.5
