Date:   Wed, 15 Jul 2020 10:33:51 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     js1304@...il.com
Cc:     Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, kernel-team@....com,
        Vlastimil Babka <vbabka@...e.cz>,
        Christoph Hellwig <hch@...radead.org>,
        Roman Gushchin <guro@...com>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
        "Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH 3/4] mm/hugetlb: make hugetlb migration callback CMA aware

On Wed 15-07-20 14:05:28, Joonsoo Kim wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@....com>
> 
> new_non_cma_page() in gup.c must allocate a new page that is not on
> the CMA area. new_non_cma_page() implements this by using the
> allocation scope APIs.
> 
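
For reference, the scope API mentioned here works roughly as below. A
minimal sketch, not the verbatim gup.c code; memalloc_nocma_save() and
memalloc_nocma_restore() are the real entry points, the allocation call
in between is illustrative:

	unsigned int flags;

	/* PF_MEMALLOC_NOCMA is set on current for the whole scope. */
	flags = memalloc_nocma_save();
	/* Allocations inside the scope avoid CMA pageblocks. */
	page = alloc_pages_node(nid, GFP_USER, 0);
	memalloc_nocma_restore(flags);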
> However, there is a work-around for hugetlb. The normal hugetlb page
> allocation API for migration is alloc_huge_page_nodemask(). It consists
> of two steps: first, dequeue a page from the pool; second, if there is
> no available page on the queue, allocate one with the page allocator.
> 
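
To illustrate the two steps (paraphrased from alloc_huge_page_nodemask();
locking and reservation accounting elided):

	/* Step 1: try to dequeue a pre-allocated page from the pool. */
	page = dequeue_huge_page_nodemask(h, gfp_mask, preferred_nid, nmask);
	if (page)
		return page;

	/* Step 2: pool is empty, fall back to the page allocator. */
	return alloc_migrate_huge_page(h, gfp_mask, preferred_nid, nmask);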
> new_non_cma_page() can't use this API since the first step (dequeue)
> isn't aware of the scope API that excludes the CMA area. So,
> new_non_cma_page() exports the hugetlb-internal function for the second
> step, alloc_migrate_huge_page(), to global scope and calls it directly.
> This is suboptimal since hugetlb pages on the queue cannot be utilized.
> 
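
Roughly what the current work-around in new_non_cma_page() looks like
(paraphrased, not the exact gup.c hunk):

	if (PageHuge(page)) {
		struct hstate *h = page_hstate(page);

		/*
		 * Bypass the pool entirely, since dequeued pages could
		 * come from CMA, and call the exported internal
		 * allocator directly.
		 */
		return alloc_migrate_huge_page(h, gfp_mask, nid, NULL);
	}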
> This patch fixes the situation by making the hugetlb dequeue function
> CMA aware: in the dequeue function, CMA pages are skipped if the
> PF_MEMALLOC_NOCMA flag is set on the current task.
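
With the dequeue step CMA aware, the gup.c caller can switch back to the
normal migration API; the caller side then looks roughly like this
(sketch under the signature used by this series, not a verbatim hunk):

	if (PageHuge(page)) {
		struct hstate *h = page_hstate(page);

		return alloc_huge_page_nodemask(h, nid, NULL, gfp_mask);
	}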

Now that this is in sync with the global case, I do not have any
objections.

> Acked-by: Mike Kravetz <mike.kravetz@...cle.com>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>

Acked-by: Michal Hocko <mhocko@...e.com>

Minor nit below

[...]
> @@ -1036,10 +1037,16 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
>  static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
>  {
>  	struct page *page;
> +	bool nocma = !!(READ_ONCE(current->flags) & PF_MEMALLOC_NOCMA);

READ_ONCE is not really needed because current->flags is only ever
modified by the current task itself, so no race is possible.
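
That is, a plain access would be enough (sketch of the simplification
this nit suggests):

	/* Plain access is fine: only 'current' modifies current->flags. */
	bool nocma = !!(current->flags & PF_MEMALLOC_NOCMA);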

> +
> +	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
> +		if (nocma && is_migrate_cma_page(page))
> +			continue;
>  
> -	list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
>  		if (!PageHWPoison(page))
>  			break;
> +	}
> +
>  	/*
>  	 * if 'non-isolated free hugepage' not found on the list,
>  	 * the allocation fails.
> @@ -1928,7 +1935,7 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
>  	return page;
>  }
>  
> -struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
> +static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
>  				     int nid, nodemask_t *nmask)
>  {
>  	struct page *page;
> -- 
> 2.7.4

-- 
Michal Hocko
SUSE Labs
