Open Source and information security mailing list archives
Date:   Fri, 3 Mar 2017 11:52:37 +0900
From:   Minchan Kim <minchan@...nel.org>
To:     <linux-kernel@...r.kernel.org>
CC:     <shli@...com>, <hannes@...xchg.org>, <hillf.zj@...baba-inc.com>,
        <hughd@...gle.com>, <mgorman@...hsingularity.net>,
        <mhocko@...e.com>, <riel@...hat.com>, <mm-commits@...r.kernel.org>
Subject: Re: + mm-reclaim-madv_free-pages.patch added to -mm tree

Hi,

On Tue, Feb 28, 2017 at 04:32:38PM -0800, akpm@...ux-foundation.org wrote:
> 
> The patch titled
>      Subject: mm: reclaim MADV_FREE pages
> has been added to the -mm tree.  Its filename is
>      mm-reclaim-madv_free-pages.patch
> 
> This patch should soon appear at
>     http://ozlabs.org/~akpm/mmots/broken-out/mm-reclaim-madv_free-pages.patch
> and later at
>     http://ozlabs.org/~akpm/mmotm/broken-out/mm-reclaim-madv_free-pages.patch
> 
> Before you just go and hit "reply", please:
>    a) Consider who else should be cc'ed
>    b) Prefer to cc a suitable mailing list as well
>    c) Ideally: find the original patch on the mailing list and do a
>       reply-to-all to that, adding suitable additional cc's
> 
> *** Remember to use Documentation/SubmitChecklist when testing your code ***
> 
> The -mm tree is included into linux-next and is updated
> there every 3-4 working days
> 
> ------------------------------------------------------
> From: Shaohua Li <shli@...com>
> Subject: mm: reclaim MADV_FREE pages
> 
> When memory pressure is high, we free MADV_FREE pages.  If the pages are
> not dirty in the pte, they can be freed immediately.  Otherwise we can't
> discard them, so we put the pages back on the anonymous LRU list (by
> setting the SwapBacked flag) and they will be reclaimed via the normal
> swapout path.
> 
> We use the normal page reclaim policy.  Since MADV_FREE pages are put on
> the inactive file list, such pages and inactive file pages are reclaimed
> according to their age.  This is expected, because we don't want to
> reclaim too many MADV_FREE pages ahead of used-once file pages.
> 
> Based on Minchan's original patch
> 
> Link: http://lkml.kernel.org/r/14b8eb1d3f6bf6cc492833f183ac8c304e560484.1487965799.git.shli@fb.com
> Signed-off-by: Shaohua Li <shli@...com>
> Acked-by: Minchan Kim <minchan@...nel.org>
> Acked-by: Michal Hocko <mhocko@...e.com>
> Acked-by: Johannes Weiner <hannes@...xchg.org>
> Acked-by: Hillf Danton <hillf.zj@...baba-inc.com>
> Cc: Hugh Dickins <hughd@...gle.com>
> Cc: Rik van Riel <riel@...hat.com>
> Cc: Mel Gorman <mgorman@...hsingularity.net>
> Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
> ---

< snip >

> @@ -1419,11 +1413,21 @@ static int try_to_unmap_one(struct page
>  			VM_BUG_ON_PAGE(!PageSwapCache(page) && PageSwapBacked(page),
>  				page);
>  
> -			if (!PageDirty(page)) {
> +			/*
> +			 * swapin page could be clean, it has data stored in
> +			 * swap. We can't silently discard it without setting
> +			 * swap entry in the page table.
> +			 */
> +			if (!PageDirty(page) && !PageSwapCache(page)) {
>  				/* It's a freeable page by MADV_FREE */
>  				dec_mm_counter(mm, MM_ANONPAGES);
> -				rp->lazyfreed++;
>  				goto discard;
> +			} else if (!PageSwapBacked(page)) {
> +				/* dirty MADV_FREE page */
> +				set_pte_at(mm, address, pvmw.pte, pteval);
> +				ret = SWAP_DIRTY;
> +				page_vma_mapped_walk_done(&pvmw);
> +				break;
>  			}

There is no point in complicating this logic with the clean swapin-page case.

Andrew,
Could you fold the patch below into mm-reclaim-madv_free-pages.patch
if others are not against it?

Thanks.

From 0c28f6560fbc4e65da4f4a8cc4664ab9f7b11cf3 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@...nel.org>
Date: Fri, 3 Mar 2017 11:42:52 +0900
Subject: [PATCH] mm: clean up lazyfree page handling

We can make the code simpler to understand without needing to be aware of
the clean-swapin page case.
This patch just cleans up lazyfree page handling in try_to_unmap_one.

Signed-off-by: Minchan Kim <minchan@...nel.org>
---
 mm/rmap.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index bb45712..f7eab40 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1413,17 +1413,17 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 			VM_BUG_ON_PAGE(!PageSwapCache(page) && PageSwapBacked(page),
 				page);
 
-			/*
-			 * swapin page could be clean, it has data stored in
-			 * swap. We can't silently discard it without setting
-			 * swap entry in the page table.
-			 */
-			if (!PageDirty(page) && !PageSwapCache(page)) {
-				/* It's a freeable page by MADV_FREE */
-				dec_mm_counter(mm, MM_ANONPAGES);
-				goto discard;
-			} else if (!PageSwapBacked(page)) {
-				/* dirty MADV_FREE page */
+			/* MADV_FREE page check */
+			if (!PageSwapBacked(page)) {
+				if (!PageDirty(page)) {
+					dec_mm_counter(mm, MM_ANONPAGES);
+					goto discard;
+				}
+
+				/*
+				 * If the page was redirtied, it cannot be
+				 * discarded. Remap the page to page table.
+				 */
 				set_pte_at(mm, address, pvmw.pte, pteval);
 				ret = SWAP_DIRTY;
 				page_vma_mapped_walk_done(&pvmw);
-- 
2.7.4
