Message-ID: <FF5B51C0-8D6E-47AB-82C8-020D5C522502@nvidia.com>
Date:   Fri, 21 Feb 2020 17:31:05 -0500
From:   Zi Yan <ziy@...dia.com>
To:     Rik van Riel <riel@...riel.com>
CC:     <linux-kernel@...r.kernel.org>, <riel@...com>,
        <kernel-team@...com>, <akpm@...ux-foundation.org>,
        <linux-mm@...ck.org>, <mhocko@...nel.org>, <vbabka@...e.cz>,
        <mgorman@...hsingularity.net>, <rientjes@...gle.com>,
        <aarcange@...hat.com>
Subject: Re: [PATCH 2/2] mm,thp,compaction,cma: allow THP migration for CMA
 allocations

On 21 Feb 2020, at 16:53, Rik van Riel wrote:

> The code to implement THP migrations already exists, and the code
> for CMA to clear out a region of memory already exists.
>
> Only a few small tweaks are needed to allow CMA to move THP memory
> when attempting an allocation from alloc_contig_range.
>
> With these changes, migrating THPs from a CMA area works when
> allocating a 1GB hugepage from CMA memory.
>
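For readers unfamiliar with the call path, here is a minimal sketch (my own hypothetical, out-of-tree example; the function and variable names are made up) of how a driver would request 1GB from a CMA area. cma_alloc() bottoms out in alloc_contig_range(), which is where the compaction changes below take effect:

#include <linux/cma.h>
#include <linux/gfp.h>
#include <linux/sizes.h>

/*
 * Hypothetical caller, not part of this series. "my_cma" is assumed
 * to be a CMA area reserved at boot (e.g. via cma_declare_contiguous()).
 */
static struct page *alloc_1gb_from_cma(struct cma *my_cma)
{
	/* 1GB worth of base pages: 262144 with a 4KB PAGE_SIZE. */
	size_t nr_pages = SZ_1G >> PAGE_SHIFT;

	/*
	 * cma_alloc() calls alloc_contig_range(), which must migrate
	 * whatever movable pages currently occupy the range; with this
	 * patch that includes THPs.
	 */
	return cma_alloc(my_cma, nr_pages, get_order(SZ_1G), false);
}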
> Signed-off-by: Rik van Riel <riel@...riel.com>
> ---
>  mm/compaction.c | 16 +++++++++-------
>  mm/page_alloc.c |  6 ++++--
>  2 files changed, 13 insertions(+), 9 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 672d3c78c6ab..f3e05c91df62 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -894,12 +894,12 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>
>  		/*
>  		 * Regardless of being on LRU, compound pages such as THP and
> -		 * hugetlbfs are not to be compacted. We can potentially save
> -		 * a lot of iterations if we skip them at once. The check is
> -		 * racy, but we can consider only valid values and the only
> -		 * danger is skipping too much.
> +		 * hugetlbfs are not to be compacted most of the time. We can
> +		 * potentially save a lot of iterations if we skip them at
> +		 * once. The check is racy, but we can consider only valid
> +		 * values and the only danger is skipping too much.
>  		 */

Maybe add “we do want to move them when allocating contiguous memory using CMA” to this comment, to help
people understand the context of the new cc->alloc_contig checks? For concreteness, the amended comment might read:
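
		/*
		 * Regardless of being on LRU, compound pages such as THP and
		 * hugetlbfs are not to be compacted most of the time. We can
		 * potentially save a lot of iterations if we skip them at
		 * once. The check is racy, but we can consider only valid
		 * values and the only danger is skipping too much. However,
		 * we do want to move them when allocating contiguous memory
		 * using CMA (cc->alloc_contig).
		 */

(Just a suggestion; apart from the quoted sentence above, the wording is mine.)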

> -		if (PageCompound(page)) {
> +		if (PageCompound(page) && !cc->alloc_contig) {
>  			const unsigned int order = compound_order(page);
>
>  			if (likely(order < MAX_ORDER))
> @@ -969,7 +969,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  			 * and it's on LRU. It can only be a THP so the order
>  			 * is safe to read and it's 0 for tail pages.
>  			 */
> -			if (unlikely(PageCompound(page))) {
> +			if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
>  				low_pfn += compound_nr(page) - 1;
>  				goto isolate_fail;
>  			}
> @@ -981,7 +981,9 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  		if (__isolate_lru_page(page, isolate_mode) != 0)
>  			goto isolate_fail;
>
> -		VM_BUG_ON_PAGE(PageCompound(page), page);
> +		/* The whole page is taken off the LRU; skip the tail pages. */
> +		if (PageCompound(page))
> +			low_pfn += compound_nr(page) - 1;
>
>  		/* Successfully isolated */
>  		del_page_from_lru_list(page, lruvec, page_lru(page));
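A quick sanity check on the skip arithmetic here (my own example, assuming a 2MB THP with 4KB base pages): compound_nr(page) is 512, so low_pfn advances by 511 to the last tail page, and the loop's own increment then moves the scan to the first page after the THP, matching the skips in the two PageCompound branches above.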
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a36736812596..38c8ddfcecc8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8253,14 +8253,16 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>
>  		/*
>  		 * Hugepages are not in LRU lists, but they're movable.
> +		 * THPs are on the LRU, but need to be counted as #small pages.
>  		 * We need not scan over tail pages because we don't
>  		 * handle each tail page individually in migration.
>  		 */
> -		if (PageHuge(page)) {
> +		if (PageTransHuge(page)) {
>  			struct page *head = compound_head(page);
>  			unsigned int skip_pages;
>
> -			if (!hugepage_migration_supported(page_hstate(head)))
> +			if (PageHuge(page) &&
> +			    !hugepage_migration_supported(page_hstate(head)))
>  				return page;
>
>  			skip_pages = compound_nr(head) - (page - head);
> -- 
> 2.24.1
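
One more arithmetic sanity check on the page_alloc.c hunk (again my own example, not from the patch): for a 2MB THP entered two pages past its head, skip_pages = 512 - 2 = 510, and, if I read the surrounding loop in has_unmovable_pages() correctly, the iterator then advances by skip_pages - 1 before the loop increment, resuming exactly at the first page after the THP, so entering mid-THP is handled.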

Everything else looks good to me.

Reviewed-by: Zi Yan <ziy@...dia.com>


--
Best Regards,
Yan Zi
