Date:   Fri, 10 Feb 2017 09:30:09 -0800
From:   Shaohua Li <shli@...com>
To:     Minchan Kim <minchan@...nel.org>
CC:     <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
        <Kernel-team@...com>, <danielmicay@...il.com>, <mhocko@...e.com>,
        <hughd@...gle.com>, <hannes@...xchg.org>, <riel@...hat.com>,
        <mgorman@...hsingularity.net>, <akpm@...ux-foundation.org>
Subject: Re: [PATCH V2 2/7] mm: move MADV_FREE pages into LRU_INACTIVE_FILE list

On Fri, Feb 10, 2017 at 03:50:22PM +0900, Minchan Kim wrote:
> Hi Shaohua,

Thanks for your time!
 
> On Fri, Feb 03, 2017 at 03:33:18PM -0800, Shaohua Li wrote:
> > Userspace indicates that MADV_FREE pages can be freed without being
> > paged out, so they are much like used-once file pages. We'd like to
> > reclaim such pages as soon as there is memory pressure. It might also
> > be unfair to always reclaim MADV_FREE pages before used-once file
> > pages, yet we definitely want to reclaim them before other anonymous
> > and file pages.
> > 
> > To speed up reclaim of MADV_FREE pages, we put them on the
> > LRU_INACTIVE_FILE list. The rationale is that the LRU_INACTIVE_FILE
> > list is tiny nowadays and should be full of used-once file pages, so
> > reclaiming MADV_FREE pages there will not interfere much with
> > anonymous or active file pages. The inactive file pages and the
> > MADV_FREE pages will be reclaimed according to their age, so we don't
> > reclaim too many MADV_FREE pages either. Putting the MADV_FREE pages
> > on the LRU_INACTIVE_FILE list also means we can reclaim them without
> > swap support. This idea was suggested by Johannes.
> > 
> > We also clear the pages' SwapBacked flag to indicate that they are
> > MADV_FREE pages.
> 
> I think this patch should be merged with 3/7. Otherwise, MADV_FREE will
> be broken during a bisect.

Maybe I should move patch 3 ahead instead; then we won't break bisect and
the patches will still be clear.
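
For context, here is a minimal userspace sketch of the MADV_FREE semantics
the commit message above describes (hypothetical buffer, assuming Linux
4.5+ and a libc that exposes MADV_FREE):

#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 16 * 4096;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	memset(buf, 0xaa, len);		/* dirty the anonymous pages */
	madvise(buf, len, MADV_FREE);	/* the kernel may now reclaim them
					 * lazily, without pageout to swap */
	buf[0] = 1;			/* a later write cancels the lazy
					 * free for that page */
	munmap(buf, len);
	return 0;
}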

> > Cc: Michal Hocko <mhocko@...e.com>
> > Cc: Minchan Kim <minchan@...nel.org>
> > Cc: Hugh Dickins <hughd@...gle.com>
> > Cc: Johannes Weiner <hannes@...xchg.org>
> > Cc: Rik van Riel <riel@...hat.com>
> > Cc: Mel Gorman <mgorman@...hsingularity.net>
> > Cc: Andrew Morton <akpm@...ux-foundation.org>
> > Signed-off-by: Shaohua Li <shli@...com>
> > ---
> >  include/linux/mm_inline.h     |  5 +++++
> >  include/linux/swap.h          |  2 +-
> >  include/linux/vm_event_item.h |  2 +-
> >  mm/huge_memory.c              |  5 ++---
> >  mm/madvise.c                  |  3 +--
> >  mm/swap.c                     | 50 ++++++++++++++++++++++++-------------------
> >  mm/vmstat.c                   |  1 +
> >  7 files changed, 39 insertions(+), 29 deletions(-)
> > 
> > diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> > index e030a68..fdded06 100644
> > --- a/include/linux/mm_inline.h
> > +++ b/include/linux/mm_inline.h
> > @@ -22,6 +22,11 @@ static inline int page_is_file_cache(struct page *page)
> >  	return !PageSwapBacked(page);
> >  }
> >  
> > +static inline bool page_is_lazyfree(struct page *page)
> > +{
> > +	return PageAnon(page) && !PageSwapBacked(page);
> > +}
> > +
> 
> trivial:
> 
> How about using PageLazyFree for consistency with the other PageXXX
> helpers? Likewise, use SetPageLazyFree/ClearPageLazyFree rather than
> the raw {Set,Clear}PageSwapBacked.

So SetPageLazyFree would be identical to ClearPageSwapBacked, which would
be weird. I personally prefer using {Set,Clear}PageSwapBacked directly,
because a reader immediately knows what's happening. With PageLazyFree,
people would always need to go back to the code and check the relationship
between PageLazyFree and PageSwapBacked.
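
To illustrate the point, wrappers along the lines Minchan suggests would
just invert the SwapBacked bit (a sketch of the suggestion, not code from
this patch):

/*
 * Lazy-free is not a real page flag: it is the *absence* of
 * PG_swapbacked on an anonymous page, so the Set/Clear wrappers
 * operate on the underlying bit in the inverse sense.
 */
static inline int PageLazyFree(struct page *page)
{
	return PageAnon(page) && !PageSwapBacked(page);
}

static inline void SetPageLazyFree(struct page *page)
{
	ClearPageSwapBacked(page);
}

static inline void ClearPageLazyFree(struct page *page)
{
	SetPageSwapBacked(page);
}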
 
> >  static __always_inline void __update_lru_size(struct lruvec *lruvec,
> >  				enum lru_list lru, enum zone_type zid,
> >  				int nr_pages)
> > diff --git a/include/linux/swap.h b/include/linux/swap.h
> > index 45e91dd..486494e 100644
> > --- a/include/linux/swap.h
> > +++ b/include/linux/swap.h
> > @@ -279,7 +279,7 @@ extern void lru_add_drain_cpu(int cpu);
> >  extern void lru_add_drain_all(void);
> >  extern void rotate_reclaimable_page(struct page *page);
> >  extern void deactivate_file_page(struct page *page);
> > -extern void deactivate_page(struct page *page);
> > +extern void mark_page_lazyfree(struct page *page);
> 
> trivial:
> 
> How about "deactivate_lazyfree_page"? IMO, it would show intention
> clear that move the lazy free page to inactive list.
> 
> It's just a matter of preference, so I'm not strongly against it.

Yes, I thought about the name a bit. I don't think we should use
"deactivate", because that sounds like it only works on active pages,
while the function works on both active and inactive pages. I'm open to
any suggestions.
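
For reference, the function is shaped like the existing deactivate_page(),
just draining into lru_lazyfree_pvecs; a rough sketch follows (the helper
name lru_lazyfree_fn and the exact checks are assumptions, and details may
differ from the actual patch):

/*
 * Batch lazy-free candidates in a per-CPU pagevec; lru_lazyfree_fn()
 * then clears PG_swapbacked and moves each page to the inactive file
 * list, mirroring how deactivate_page() uses lru_deactivate_pvecs.
 */
void mark_page_lazyfree(struct page *page)
{
	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page)) {
		struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);

		get_page(page);
		if (!pagevec_add(pvec, page))
			pagevec_lru_move_fn(pvec, lru_lazyfree_fn, NULL);
		put_cpu_var(lru_lazyfree_pvecs);
	}
}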

> >  extern void swap_setup(void);
> >  
> >  extern void add_page_to_unevictable_list(struct page *page);
> > diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> > index 6aa1b6c..94e58da 100644
> > --- a/include/linux/vm_event_item.h
> > +++ b/include/linux/vm_event_item.h
> > @@ -25,7 +25,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
> >  		FOR_ALL_ZONES(PGALLOC),
> >  		FOR_ALL_ZONES(ALLOCSTALL),
> >  		FOR_ALL_ZONES(PGSCAN_SKIP),
> > -		PGFREE, PGACTIVATE, PGDEACTIVATE,
> > +		PGFREE, PGACTIVATE, PGDEACTIVATE, PGLAZYFREE,
> >  		PGFAULT, PGMAJFAULT,
> >  		PGLAZYFREED,
> >  		PGREFILL,
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index ecf569d..ddb9a94 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -1391,9 +1391,6 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >  		ClearPageDirty(page);
> >  	unlock_page(page);
> >  
> > -	if (PageActive(page))
> > -		deactivate_page(page);
> > -
> >  	if (pmd_young(orig_pmd) || pmd_dirty(orig_pmd)) {
> >  		orig_pmd = pmdp_huge_get_and_clear_full(tlb->mm, addr, pmd,
> >  			tlb->fullmm);
> > @@ -1404,6 +1401,8 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >  		set_pmd_at(mm, addr, pmd, orig_pmd);
> >  		tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
> >  	}
> > +
> > +	mark_page_lazyfree(page);
> >  	ret = true;
> >  out:
> >  	spin_unlock(ptl);
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index c867d88..c24549e 100644
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -378,10 +378,9 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >  			ptent = pte_mkclean(ptent);
> >  			ptent = pte_wrprotect(ptent);
> >  			set_pte_at(mm, addr, pte, ptent);
> > -			if (PageActive(page))
> > -				deactivate_page(page);
> >  			tlb_remove_tlb_entry(tlb, pte, addr);
> >  		}
> > +		mark_page_lazyfree(page);
> >  	}
> >  out:
> >  	if (nr_swap) {
> > diff --git a/mm/swap.c b/mm/swap.c
> > index c4910f1..69a7e9d 100644
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -46,7 +46,7 @@ int page_cluster;
> >  static DEFINE_PER_CPU(struct pagevec, lru_add_pvec);
> >  static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
> >  static DEFINE_PER_CPU(struct pagevec, lru_deactivate_file_pvecs);
> > -static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);
> > +static DEFINE_PER_CPU(struct pagevec, lru_lazyfree_pvecs);
> >  #ifdef CONFIG_SMP
> >  static DEFINE_PER_CPU(struct pagevec, activate_page_pvecs);
> >  #endif
> > @@ -268,6 +268,11 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
> >  		int lru = page_lru_base_type(page);
> >  
> >  		del_page_from_lru_list(page, lruvec, lru);
> > +		if (page_is_lazyfree(page)) {
> > +			SetPageSwapBacked(page);
> > +			file = 0;
> 
> I don't see why you set file to 0. Could you explain the rationale?

We are moving the page back to the active anonymous list, so I'd like to
charge the recent_scanned and recent_rotated statistics to anonymous.
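
For reference, the file flag feeds the reclaim statistics at the tail of
__activate_page(); a sketch of that tail from the existing mm/swap.c (the
rest of the hunk is trimmed in the quote above):

	/*
	 * With file forced to 0 for a former lazy-free page, the
	 * rotation below is charged to the anonymous
	 * recent_scanned/recent_rotated counters, not the file ones.
	 */
	SetPageActive(page);
	lru += LRU_ACTIVE;
	add_page_to_lru_list(page, lruvec, lru);

	__count_vm_event(PGACTIVATE);
	update_page_reclaim_stat(lruvec, file, 1);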

Thanks,
Shaohua
