Message-ID: <4591b38d-fdd0-e2e6-bf11-6e5669575736@suse.cz>
Date:   Wed, 1 Jul 2020 20:02:51 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     js1304@...il.com, Andrew Morton <akpm@...ux-foundation.org>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>,
        Hugh Dickins <hughd@...gle.com>,
        Minchan Kim <minchan@...nel.org>,
        Mel Gorman <mgorman@...hsingularity.net>, kernel-team@....com,
        Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v6 2/6] mm/vmscan: protect the workingset on anonymous LRU

On 6/17/20 7:26 AM, js1304@...il.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@....com>

Hi, how about a more descriptive subject, such as

mm/vmscan: add new anonymous pages to inactive LRU list

> In the current implementation, a newly created or swapped-in anonymous
> page starts on the active list. Growing the active list results in
> rebalancing the active/inactive lists, so old pages on the active list
> are demoted to the inactive list. Hence, a page on the active list
> isn't protected at all.
> 
> The following is an example of this situation.
> 
> Assume there are 50 hot pages on the active list. The numbers denote the
> number of pages on the active/inactive lists (active | inactive).
> 
> 1. 50 hot pages on active list
> 50(h) | 0
> 
> 2. workload: 50 newly created (used-once) pages
> 50(uo) | 50(h)
> 
> 3. workload: another 50 newly created (used-once) pages
> 50(uo) | 50(uo), swap-out 50(h)
> 
> This patch tries to fix this issue. As with the file LRU, newly created
> or swapped-in anonymous pages will be inserted into the inactive list.
> They are promoted to the active list when enough references happen. This
> simple modification changes the above example as follows.
> 
> 1. 50 hot pages on active list
> 50(h) | 0
> 
> 2. workload: 50 newly created (used-once) pages
> 50(h) | 50(uo)
> 
> 3. workload: another 50 newly created (used-once) pages
> 50(h) | 50(uo), swap-out 50(uo)
> 
> As you can see, the hot pages on the active list are now protected.
> 
> Note that this implementation has a drawback: a page cannot be promoted
> and will be swapped out if its re-access interval is greater than the
> size of the inactive list but less than the size of the total
> (active + inactive). To solve this potential issue, a following patch
> will apply workingset detection that is applied to file LRU some day
> before.

detection similar to the one that's already applied to file LRU.
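
By the way, the scenario from the commit log is easy to reproduce with a
throwaway userspace model. The following is just my sketch of the policy
(FIFO lists, reclaim from the inactive tail, demotion when the active
list grows past half of memory), not kernel code:

	#include <stdio.h>
	#include <string.h>

	#define CAP 100				/* pages that fit in RAM */

	static char active[CAP + 1], inactive[CAP + 1];
	static int nactive, ninactive, swapped_hot;

	/* index 0 is the list head, the highest index is the tail */
	static void push_head(char *list, int *n, char tag)
	{
		memmove(list + 1, list, *n);
		list[0] = tag;
		(*n)++;
	}

	static char pop_tail(char *list, int *n)
	{
		return list[--(*n)];
	}

	static void add_page(char tag, int to_active)
	{
		/* memory is full: reclaim from the inactive tail */
		if (nactive + ninactive == CAP) {
			if (pop_tail(inactive, &ninactive) == 'h')
				swapped_hot++;
		}
		if (to_active)
			push_head(active, &nactive, tag);
		else
			push_head(inactive, &ninactive, tag);
		/* growing the active list rebalances active/inactive */
		while (nactive > CAP / 2)
			push_head(inactive, &ninactive,
				  pop_tail(active, &nactive));
	}

	static void run(int to_active)
	{
		int i;

		nactive = ninactive = swapped_hot = 0;
		for (i = 0; i < 50; i++)	/* 1. fifty hot pages, already promoted */
			add_page('h', 1);
		for (i = 0; i < 100; i++)	/* 2.+3. one hundred used-once pages */
			add_page('u', to_active);
		printf("new anon pages -> %s list: %d hot pages swapped out\n",
		       to_active ? "active  " : "inactive", swapped_hot);
	}

	int main(void)
	{
		run(1);		/* old behaviour */
		run(0);		/* behaviour with this patch */
		return 0;
	}

Running it prints 50 swapped-out hot pages for the old insertion policy
and 0 for the new one, matching the two traces above.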

> v6: Before this patch, all anon pages (inactive + active) were considered
> part of the workingset. With this patch, only active pages are. So the
> file refault formula, which used the number of all anon pages, is
> changed to use only the number of active anon pages.

a "v6" note is more suitable for a diffstat area than commit log, but it's good
to mention this so drop the 'v6:'?
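
For readers without the rest of the series at hand, the formula change
described in that note boils down to roughly this in workingset_refault()
(a paraphrase of the file-refault path only; the memcg lookup and the
rest of the function are omitted):

	workingset_size = lruvec_page_state(eviction_lruvec, NR_ACTIVE_FILE);
	if (mem_cgroup_get_nr_swap_pages(memcg) > 0) {
		/*
		 * Before v6 this also added NR_INACTIVE_ANON; with this
		 * patch only active anon pages count as workingset.
		 */
		workingset_size += lruvec_page_state(eviction_lruvec,
						     NR_ACTIVE_ANON);
	}
	if (refault_distance > workingset_size)
		goto out;	/* refaulted too far away, keep it inactive */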

> Acked-by: Johannes Weiner <hannes@...xchg.org>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>

Acked-by: Vlastimil Babka <vbabka@...e.cz>

One more nit below.

> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -476,23 +476,24 @@ void lru_cache_add(struct page *page)
>  EXPORT_SYMBOL(lru_cache_add);
>  
>  /**
> - * lru_cache_add_active_or_unevictable
> + * lru_cache_add_inactive_or_unevictable
>   * @page:  the page to be added to LRU
>   * @vma:   vma in which page is mapped for determining reclaimability
>   *
> - * Place @page on the active or unevictable LRU list, depending on its
> + * Place @page on the inactive or unevictable LRU list, depending on its
>   * evictability.  Note that if the page is not evictable, it goes
>   * directly back onto it's zone's unevictable list, it does NOT use a
>   * per cpu pagevec.
>   */
> -void lru_cache_add_active_or_unevictable(struct page *page,
> +void lru_cache_add_inactive_or_unevictable(struct page *page,
>  					 struct vm_area_struct *vma)
>  {
> +	bool unevictable;
> +
>  	VM_BUG_ON_PAGE(PageLRU(page), page);
>  
> -	if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
> -		SetPageActive(page);
> -	else if (!TestSetPageMlocked(page)) {
> +	unevictable = (vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED;
> +	if (unevictable && !TestSetPageMlocked(page)) {

I guess this could be "if (unlikely(unevictable) && ..." to match the
previous "if (likely(evictable)) ... else ...".
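
i.e. something like:

	unevictable = (vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED;
	if (unlikely(unevictable) && !TestSetPageMlocked(page)) {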

>  		/*
>  		 * We use the irq-unsafe __mod_zone_page_stat because this
>  		 * counter is not modified from interrupt context, and the pte
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index c047789..38f6433 100644
