Date:   Fri, 5 Feb 2021 12:54:03 +0100
From:   Oscar Salvador <osalvador@...e.de>
To:     Muchun Song <songmuchun@...edance.com>
Cc:     corbet@....net, mike.kravetz@...cle.com, tglx@...utronix.de,
        mingo@...hat.com, bp@...en8.de, x86@...nel.org, hpa@...or.com,
        dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
        viro@...iv.linux.org.uk, akpm@...ux-foundation.org,
        paulmck@...nel.org, mchehab+huawei@...nel.org,
        pawan.kumar.gupta@...ux.intel.com, rdunlap@...radead.org,
        oneukum@...e.com, anshuman.khandual@....com, jroedel@...e.de,
        almasrymina@...gle.com, rientjes@...gle.com, willy@...radead.org,
        mhocko@...e.com, song.bao.hua@...ilicon.com, david@...hat.com,
        naoya.horiguchi@....com, duanxiongchun@...edance.com,
        linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v14 4/8] mm: hugetlb: alloc the vmemmap pages associated
 with each HugeTLB page

On Thu, Feb 04, 2021 at 11:50:39AM +0800, Muchun Song wrote:
> When we free a HugeTLB page to the buddy allocator, we should allocate the
> vmemmap pages associated with it. But we may not be able to allocate the
> vmemmap pages when the system is under memory pressure. In this case, we
> just refuse to free the HugeTLB page instead of looping forever trying to
> allocate the pages.
> 
> Signed-off-by: Muchun Song <songmuchun@...edance.com>

[...]

> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 4cfca27c6d32..5518283aa667 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1397,16 +1397,26 @@ static void __free_huge_page(struct page *page)
>  		h->resv_huge_pages++;
>  
>  	if (HPageTemporary(page)) {
> -		list_del(&page->lru);
>  		ClearHPageTemporary(page);
> +
> +		if (alloc_huge_page_vmemmap(h, page, GFP_ATOMIC)) {
> +			h->surplus_huge_pages++;
> +			h->surplus_huge_pages_node[nid]++;
> +			goto enqueue;
> +		}
> +		list_del(&page->lru);
>  		update_and_free_page(h, page);
>  	} else if (h->surplus_huge_pages_node[nid]) {
> +		if (alloc_huge_page_vmemmap(h, page, GFP_ATOMIC))
> +			goto enqueue;
> +
>  		/* remove the page from active list */
>  		list_del(&page->lru);
>  		update_and_free_page(h, page);
>  		h->surplus_huge_pages--;
>  		h->surplus_huge_pages_node[nid]--;
>  	} else {
> +enqueue:
>  		arch_clear_hugepage_flags(page);
>  		enqueue_huge_page(h, page);

Ok, we just keep them in the pool in case we fail to allocate.


> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index ddd872ab6180..0bd6b8d7282d 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -169,6 +169,8 @@
>   * (last) level. So this type of HugeTLB page can be optimized only when
>   * the size of its struct page structs is greater than 2 pages.

[...]

> +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head, gfp_t gfp_mask)
> +{
> +	int ret;
> +	unsigned long vmemmap_addr = (unsigned long)head;
> +	unsigned long vmemmap_end, vmemmap_reuse;
> +
> +	if (!free_vmemmap_pages_per_hpage(h))
> +		return 0;
> +
> +	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
> +	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
> +	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
> +
> +	/*
> +	 * The pages which the vmemmap virtual address range [@vmemmap_addr,
> +	 * @vmemmap_end) are mapped to are freed to the buddy allocator, and
> +	 * the range is mapped to the page which @vmemmap_reuse is mapped to.
> +	 * When a HugeTLB page is freed to the buddy allocator, previously
> +	 * discarded vmemmap pages must be allocated and remapped.
> +	 */
> +	ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
> +				  gfp_mask | __GFP_NOWARN | __GFP_THISNODE);

Why don't you set all the GFP flags here?

vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
                    GFP_ATOMIC | __GFP_NOWARN | __GFP_THISNODE)?
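
That way alloc_huge_page_vmemmap() would not need a gfp_mask argument at
all. A sketch of what I mean (this is not the submitted patch, just your
function with the flags folded into the callee):

int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
{
	unsigned long vmemmap_addr = (unsigned long)head;
	unsigned long vmemmap_end, vmemmap_reuse;

	if (!free_vmemmap_pages_per_hpage(h))
		return 0;

	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;

	/* All the GFP flags are decided in one place now. */
	return vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
				   GFP_ATOMIC | __GFP_NOWARN | __GFP_THISNODE);
}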

> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 50c1dc00b686..277eb43aebd5 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c

[...]

> +static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
> +				   gfp_t gfp_mask, struct list_head *list)

I think it would make more sense for this function to take the nid and the
number of pages to allocate directly.

> +{
> +	unsigned long addr;
> +	int nid = page_to_nid((const void *)start);

Uh, that void cast is a bit ugly: page_to_nid((struct page *)start).
No need for the const either.

> +	struct page *page, *next;
> +
> +	for (addr = start; addr < end; addr += PAGE_SIZE) {
> +		page = alloc_pages_node(nid, gfp_mask, 0);
> +		if (!page)
> +			goto out;
> +		list_add_tail(&page->lru, list);
> +	}

and replace this with a while (nr_pages--) loop, etc.
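
Putting the two together, something like this is what I have in mind (only
a sketch; I am assuming the out: label in your version simply frees what
was already allocated):

static int alloc_vmemmap_page_list(int nid, unsigned int nr_pages,
				   gfp_t gfp_mask, struct list_head *list)
{
	struct page *page, *next;

	while (nr_pages--) {
		page = alloc_pages_node(nid, gfp_mask, 0);
		if (!page)
			goto out;
		list_add_tail(&page->lru, list);
	}

	return 0;
out:
	/* Drop whatever we managed to allocate before the failure. */
	list_for_each_entry_safe(page, next, list, lru) {
		list_del(&page->lru);
		__free_page(page);
	}
	return -ENOMEM;
}

with the caller computing the arguments once, e.g.:

	alloc_vmemmap_page_list(page_to_nid((struct page *)start),
				(end - start) >> PAGE_SHIFT, gfp_mask, &list);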

I did not really go in depth, but it looks good to me, and it is much
simpler overall.

The only thing I am not sure about is the use of GFP_ATOMIC.
It has been raised before that when we are close to OOM, the user might want
to try to free up some memory by dissolving free_huge_pages, and so we might
want to dip into the reserves.

Given that we are prepared to fail, and that we do not retry, I would rather
use GFP_KERNEL than have X pages atomically allocated only to realize, at
some point, that we cannot go further and have to drop them on the floor.
I think those reserves would be better off used by someone else in that
situation.
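
Concretely, I am thinking of something along these lines (just a sketch,
and assuming the call site is allowed to sleep; the __GFP_NORETRY is my
own addition, to keep the "prepared to fail, do not loop" semantics that
small GFP_KERNEL allocations would otherwise not have):

	/*
	 * GFP_KERNEL can reclaim instead of raiding the atomic reserves;
	 * __GFP_NORETRY bails out early if reclaim makes no progress.
	 */
	ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
				  GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN |
				  __GFP_THISNODE);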

But these are just my thoughts, and there seems to be a consensus on using
GFP_ATOMIC anyway.

-- 
Oscar Salvador
SUSE L3
