Message-ID: <df83c62f-209f-b1fd-3a5c-c81c82cb2606@oracle.com>
Date:   Thu, 27 Feb 2020 15:41:39 -0800
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Rik van Riel <riel@...riel.com>, linux-kernel@...r.kernel.org
Cc:     kernel-team@...com, akpm@...ux-foundation.org, linux-mm@...ck.org,
        mhocko@...nel.org, vbabka@...e.cz, mgorman@...hsingularity.net,
        rientjes@...gle.com, aarcange@...hat.com, ziy@...dia.com
Subject: Re: [PATCH v2 2/2] mm,thp,compaction,cma: allow THP migration for CMA
 allocations

On 2/27/20 1:32 PM, Rik van Riel wrote:
> The code to implement THP migrations already exists, and the code
> for CMA to clear out a region of memory already exists.
> 
> Only a few small tweaks are needed to allow CMA to move THP memory
> when attempting an allocation from alloc_contig_range.
> 
> With these changes, migrating THPs from a CMA area works when
> allocating a 1GB hugepage from CMA memory.
> 
> Signed-off-by: Rik van Riel <riel@...riel.com>
> Reviewed-by: Zi Yan <ziy@...dia.com>
> ---
>  mm/compaction.c | 22 +++++++++++++---------
>  mm/page_alloc.c |  9 +++++++--
>  2 files changed, 20 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 672d3c78c6ab..000ade085b89 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -894,12 +894,13 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  
>  		/*
>  		 * Regardless of being on LRU, compound pages such as THP and
> -		 * hugetlbfs are not to be compacted. We can potentially save
> -		 * a lot of iterations if we skip them at once. The check is
> -		 * racy, but we can consider only valid values and the only
> -		 * danger is skipping too much.
> +		 * hugetlbfs are not to be compacted unless we are attempting
> +		 * an allocation much larger than the huge page size (eg CMA).
> +		 * We can potentially save a lot of iterations if we skip them
> +		 * at once. The check is racy, but we can consider only valid
> +		 * values and the only danger is skipping too much.
>  		 */
> -		if (PageCompound(page)) {
> +		if (PageCompound(page) && !cc->alloc_contig) {
>  			const unsigned int order = compound_order(page);
>  
>  			if (likely(order < MAX_ORDER))
> @@ -969,7 +970,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  			 * and it's on LRU. It can only be a THP so the order
>  			 * is safe to read and it's 0 for tail pages.
>  			 */
> -			if (unlikely(PageCompound(page))) {
> +			if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
>  				low_pfn += compound_nr(page) - 1;
>  				goto isolate_fail;
>  			}
> @@ -981,12 +982,15 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  		if (__isolate_lru_page(page, isolate_mode) != 0)
>  			goto isolate_fail;
>  
> -		VM_BUG_ON_PAGE(PageCompound(page), page);
> +		/* The whole page is taken off the LRU; skip the tail pages. */
> +		if (PageCompound(page))
> +			low_pfn += compound_nr(page) - 1;
>  
>  		/* Successfully isolated */
>  		del_page_from_lru_list(page, lruvec, page_lru(page));
> -		inc_node_page_state(page,
> -				NR_ISOLATED_ANON + page_is_file_cache(page));
> +		mod_node_page_state(page_pgdat(page),
> +				NR_ISOLATED_ANON + page_is_file_cache(page),
> +				hpage_nr_pages(page));
>  
>  isolate_success:
>  		list_add(&page->lru, &cc->migratepages);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a36736812596..6257c849cc00 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8253,14 +8253,19 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>  
>  		/*
>  		 * Hugepages are not in LRU lists, but they're movable.
> +		 * THPs are on the LRU, but need to be counted as #small pages.
>  		 * We need not scan over tail pages because we don't
>  		 * handle each tail page individually in migration.
>  		 */
> -		if (PageHuge(page)) {
> +		if (PageHuge(page) || PageTransCompound(page)) {
>  			struct page *head = compound_head(page);
>  			unsigned int skip_pages;
>  
> -			if (!hugepage_migration_supported(page_hstate(head)))
> +			if (PageHuge(page) &&
> +			    !hugepage_migration_supported(page_hstate(head)))
> +				return page;
> +
> +			if (!PageLRU(head) && !__PageMovable(head))

Pretty sure this check is going to be true for hugetlb pages, since they are
neither on the LRU nor marked __PageMovable.  So, this will change behavior
and make all hugetlb pages look unmovable.  Perhaps only check this condition
for THP pages?
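
Something along these lines is what I have in mind. Untested sketch against
this hunk, keeping your combined PageHuge/PageTransCompound branch but
restricting the new LRU/__PageMovable test to THPs:

		if (PageHuge(page) || PageTransCompound(page)) {
			struct page *head = compound_head(page);
			unsigned int skip_pages;

			if (PageHuge(page) &&
			    !hugepage_migration_supported(page_hstate(head)))
				return page;

			/*
			 * hugetlb pages are not on the LRU and are not
			 * __PageMovable, so only apply this test to THPs.
			 */
			if (!PageHuge(page) &&
			    !PageLRU(head) && !__PageMovable(head))
				return page;

			skip_pages = compound_nr(head) - (page - head);
			...

That way hugetlb pages keep going through the existing
hugepage_migration_supported() check only, as they do today.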
-- 
Mike Kravetz

>  				return page;
>  
>  			skip_pages = compound_nr(head) - (page - head);
> 
