Message-ID: <20200501070424.a5uugk7am2yzzx4v@box>
Date:   Fri, 1 May 2020 10:04:24 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     Yang Shi <yang.shi@...ux.alibaba.com>
Cc:     kirill.shutemov@...ux.intel.com, hughd@...gle.com,
        aarcange@...hat.com, akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [v2 linux-next PATCH 2/2] mm: khugepaged: don't have to put
 being freed page back to lru

On Fri, May 01, 2020 at 04:41:19AM +0800, Yang Shi wrote:
> When khugepaged has successfully isolated and copied data from an old page
> to the collapsed THP, the old page is about to be freed once its last
> mapcount is gone.  Putting the page back on the lru is not productive in
> this case: the page might be isolated by vmscan again, but it cannot be
> reclaimed because try_to_unmap() cannot unmap it at all.
> 
> If khugepaged is the last user of the page, it can be freed directly.
> So clear the active and unevictable flags, unlock the page and drop the
> refcount taken at isolation instead of calling putback_lru_page().

Any reason putback_lru_page() cannot do this internally? I mean, if
page_count() == 1, free the page there.
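
Something along these lines, maybe. This is only an untested sketch of the
idea, assuming putback_lru_page() is still the simple lru_cache_add() +
put_page() pair in mm/vmscan.c, not an actual patch:

void putback_lru_page(struct page *page)
{
	/*
	 * If the caller holds the only remaining reference, parking the
	 * page on the LRU just delays freeing it, so drop it right away.
	 */
	if (page_count(page) == 1) {
		ClearPageActive(page);
		ClearPageUnevictable(page);
		put_page(page);		/* last ref, frees the page */
		return;
	}

	lru_cache_add(page);
	put_page(page);			/* drop ref taken at isolation */
}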
> 
> Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> Cc: Hugh Dickins <hughd@...gle.com>
> Cc: Andrea Arcangeli <aarcange@...hat.com>
> Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
> ---
> v2: Check mapcount and skip putback lru if the last mapcount is gone
> 
>  mm/khugepaged.c | 20 ++++++++++++++------
>  1 file changed, 14 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 0c8d30b..1fdd677 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -559,10 +559,18 @@ void __khugepaged_exit(struct mm_struct *mm)
>  static void release_pte_page(struct page *page)
>  {
>  	mod_node_page_state(page_pgdat(page),
> -			NR_ISOLATED_ANON + page_is_file_lru(page),
> -			-compound_nr(page));
> -	unlock_page(page);
> -	putback_lru_page(page);
> +		NR_ISOLATED_ANON + page_is_file_lru(page), -compound_nr(page));
> +
> +	if (total_mapcount(page)) {
> +		unlock_page(page);
> +		putback_lru_page(page);
> +	} else {
> +		ClearPageActive(page);
> +		ClearPageUnevictable(page);
> +		unlock_page(page);
> +		/* Drop refcount from isolate */
> +		put_page(page);
> +	}
>  }
>  
>  static void release_pte_pages(pte_t *pte, pte_t *_pte,
> @@ -771,8 +779,6 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>  		} else {
>  			src_page = pte_page(pteval);
>  			copy_user_highpage(page, src_page, address, vma);
> -			if (!PageCompound(src_page))
> -				release_pte_page(src_page);
>  			/*
>  			 * ptl mostly unnecessary, but preempt has to
>  			 * be disabled to update the per-cpu stats
> @@ -786,6 +792,8 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>  			pte_clear(vma->vm_mm, address, _pte);
>  			page_remove_rmap(src_page, false);
>  			spin_unlock(ptl);
> +			if (!PageCompound(src_page))
> +				release_pte_page(src_page);
>  			free_page_and_swap_cache(src_page);
>  		}
>  	}
> -- 
> 1.8.3.1
> 
> 

-- 
 Kirill A. Shutemov
