Date:   Mon, 27 Feb 2017 08:19:08 -0800
From:   Shaohua Li <shli@...com>
To:     Minchan Kim <minchan@...nel.org>
CC:     <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
        <Kernel-team@...com>, <mhocko@...e.com>, <hughd@...gle.com>,
        <hannes@...xchg.org>, <riel@...hat.com>,
        <mgorman@...hsingularity.net>, <akpm@...ux-foundation.org>
Subject: Re: [PATCH V5 4/6] mm: reclaim MADV_FREE pages

On Mon, Feb 27, 2017 at 03:33:15PM +0900, Minchan Kim wrote:
> Hi Shaohua,
> 
> On Fri, Feb 24, 2017 at 01:31:47PM -0800, Shaohua Li wrote:
> > When memory pressure is high, we free MADV_FREE pages. If the pages are
> > not dirty in the pte, they can be freed immediately. Otherwise we can't
> > reclaim them: we put the pages back on the anonymous LRU list (by
> > setting the SwapBacked flag) and they will be reclaimed via the normal
> > swapout path.
> > 
> > We use the normal page reclaim policy. Since MADV_FREE pages are put on
> > the inactive file list, such pages and inactive file pages are reclaimed
> > according to their age. This is intentional, because we don't want to
> > reclaim too many MADV_FREE pages ahead of used-once file pages.
> > 
> > Based on Minchan's original patch
> > 
> > Cc: Michal Hocko <mhocko@...e.com>
> > Cc: Minchan Kim <minchan@...nel.org>
> > Cc: Hugh Dickins <hughd@...gle.com>
> > Cc: Johannes Weiner <hannes@...xchg.org>
> > Cc: Rik van Riel <riel@...hat.com>
> > Cc: Mel Gorman <mgorman@...hsingularity.net>
> > Cc: Andrew Morton <akpm@...ux-foundation.org>
> > Signed-off-by: Shaohua Li <shli@...com>
> > ---
> >  include/linux/rmap.h |  2 +-
> >  mm/huge_memory.c     |  2 ++
> >  mm/madvise.c         |  1 +
> >  mm/rmap.c            | 40 +++++++++++++++++-----------------------
> >  mm/vmscan.c          | 34 ++++++++++++++++++++++------------
> >  5 files changed, 43 insertions(+), 36 deletions(-)
> > 
> > diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> > index 7a39414..fee10d7 100644
> > --- a/include/linux/rmap.h
> > +++ b/include/linux/rmap.h
> > @@ -298,6 +298,6 @@ static inline int page_mkclean(struct page *page)
> >  #define SWAP_AGAIN	1
> >  #define SWAP_FAIL	2
> >  #define SWAP_MLOCK	3
> > -#define SWAP_LZFREE	4
> > +#define SWAP_DIRTY	4
> 
> I'm still not convinced that we should introduce SWAP_DIRTY in try_to_unmap.
> https://marc.info/?l=linux-mm&m=148797879123238&w=2
> 
> We already do SetPageMlocked in there, so why can't we do SetPageSwapBacked
> in there as well? It's not about changing the LRU type; it's just an
> indication that we found the page's status had changed late.
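
For context, the userspace side of what the quoted commit message describes
is just the madvise(2) MADV_FREE hint; a minimal sketch, assuming Linux 4.5+
headers and an illustrative buffer size:

#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 64 << 20;	/* 64MB scratch buffer, illustrative only */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	memset(buf, 0xa5, len);	/* dirty the anonymous pages */

	/*
	 * Hint that the contents are no longer needed.  Under memory
	 * pressure the kernel may reclaim these pages without writing
	 * them to swap; until then, reads still see the old data.
	 */
	madvise(buf, len, MADV_FREE);
	return 0;
}

Pages freed this way can be dropped outright while they are still clean; the
question above is what to do once such a page has been written to again.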

I don't have a strong preference on this one. Personally I agree with
Johannes that handling the failure in vmscan sounds better. But since the
failure handling is just one statement, it probably doesn't make much
difference. If you and Johannes reach an agreement, I'll follow.
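
Either way, the userspace contract that both options have to preserve is the
same: once a MADV_FREE'd page has been written to again, its contents must
survive reclaim, i.e. be swapped out rather than dropped. A minimal,
illustrative check (assumes Linux 4.5+; the buffer size is hypothetical):

#include <assert.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4096;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	memset(p, 'A', len);		/* dirty the page */
	madvise(p, len, MADV_FREE);	/* page may now be freed lazily */

	p[0] = 'B';			/* re-dirty: no longer freeable */

	/*
	 * Whatever the kernel does under memory pressure from here on,
	 * the re-dirtied contents must be preserved (swapped out and back
	 * in if necessary), never silently replaced with a zero page.
	 */
	assert(p[0] == 'B');
	return 0;
}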

Thanks,
Shaohua
