Message-ID: <119542cd-939f-3185-1d51-a177d4da1dff@huawei.com>
Date:   Tue, 9 Aug 2022 10:48:53 +0800
From:   Miaohe Lin <linmiaohe@...wei.com>
To:     Mike Kravetz <mike.kravetz@...cle.com>
CC:     Muchun Song <songmuchun@...edance.com>,
        Joao Martins <joao.m.martins@...cle.com>,
        Matthew Wilcox <willy@...radead.org>,
        Michal Hocko <mhocko@...e.com>, Peter Xu <peterx@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux-MM <linux-mm@...ck.org>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] hugetlb: freeze allocated pages before creating hugetlb
 pages

On 2022/8/9 5:28, Mike Kravetz wrote:
> When creating hugetlb pages, the hugetlb code must first allocate
> contiguous pages from a low level allocator such as buddy, cma or
> memblock.  The pages returned from these low level allocators are
> ref counted.  This creates potential issues with other code taking
> speculative references on these pages before they can be transformed to
> a hugetlb page.  This issue has been addressed with methods and code
> such as that provided in [1].
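
Just to spell out the race for context: a speculative reference is
typically a pfn walker doing something like the sketch below (my own
illustration, not code from this patch), which can only succeed while
the ref count is non-zero.  A frozen page (ref count 0) cannot be
pinned this way:

	struct page *p = pfn_to_page(pfn);

	/* Only pins the page if its ref count is already non-zero. */
	if (get_page_unless_zero(p)) {
		/* ... inspect the page ... */
		put_page(p);
	}
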
> 
> Recent discussions about vmemmap freeing [2] have indicated that it
> would be beneficial to freeze all sub pages, including the head page
> of pages returned from low level allocators before converting to a
> hugetlb page.  This helps avoid races if want to replace the page
> containing vmemmap for the head page.
> 
> There have been proposals to change at least the buddy allocator to
> return frozen pages as described at [3].  If such a change is made, it
> can be employed by the hugetlb code.  However, as mentioned above,
> hugetlb uses several low level allocators so each would need to be
> modified to return frozen pages.  For now, we can manually freeze the
> returned pages.  This is done in two places:
> 1) alloc_buddy_huge_page, only the returned head page is ref counted.
>    We freeze the head page, retrying once in the VERY rare case where
>    there may be an inflated ref count.
> 2) prep_compound_gigantic_page, for gigantic pages the current code
>    freezes all pages except the head page.  New code will simply freeze
>    the head page as well.
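
For case 1, the freeze-and-retry logic presumably looks roughly like
the sketch below (my paraphrase of the idea, not the exact hunk;
page_ref_freeze() succeeds only if the ref count equals the expected
value, here 1):

	bool retry = true;

retry:
	page = __alloc_pages(gfp_mask, order, nid, nmask);
	if (page && !page_ref_freeze(page, 1)) {
		/* A transient speculative reference inflated the count. */
		__free_pages(page, order);
		if (retry) {		/* retry once */
			retry = false;
			goto retry;
		}
		page = NULL;		/* give up; the VERY rare case */
	}
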
> 
> In a few other places, code checks for inflated ref counts on newly
> allocated hugetlb pages.  With the modifications to freeze after
> allocating, this code can be removed.
> 
> After hugetlb pages are freshly allocated, they are often added to the
> hugetlb free lists.  Since these pages were previously ref counted, this
> was done via put_page() which would end up calling the hugetlb
> destructor: free_huge_page.  With changes to freeze pages, we simply
> call free_huge_page directly to add the pages to the free list.
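
So the pattern for adding a fresh page to the free list changes
roughly like this (sketch):

	/* Before: page had ref count 1; dropping the last reference
	 * invoked the compound page destructor. */
	put_page(page);			/* -> free_huge_page(page) */

	/* After: page is frozen (ref count 0); call the destructor
	 * directly to place it on the hugetlb free list. */
	free_huge_page(page);
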
> 
> In a few other places, freshly allocated hugetlb pages were immediately
> put into use, and the expectation was they were already ref counted.  In
> these cases, we must manually ref count the page.
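
I assume the manual ref count is taken with set_page_refcounted(),
which moves a frozen page from ref count 0 to 1, e.g.:

	prep_new_huge_page(h, page, nid);
	set_page_refcounted(page);	/* 0 -> 1, first real user */
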
> 
> [1] https://lore.kernel.org/linux-mm/20210622021423.154662-3-mike.kravetz@oracle.com/
> [2] https://lore.kernel.org/linux-mm/20220802180309.19340-1-joao.m.martins@oracle.com/
> [3] https://lore.kernel.org/linux-mm/20220531150611.1303156-1-willy@infradead.org/
> 
> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
> ---
>  mm/hugetlb.c | 97 +++++++++++++++++++---------------------------------
>  1 file changed, 35 insertions(+), 62 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 28516881a1b2..6b90d85d545b 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1769,13 +1769,12 @@ static bool __prep_compound_gigantic_page(struct page *page, unsigned int order,
>  {
>  	int i, j;
>  	int nr_pages = 1 << order;
> -	struct page *p = page + 1;
> +	struct page *p = page;
>  
>  	/* we rely on prep_new_huge_page to set the destructor */
>  	set_compound_order(page, order);
> -	__ClearPageReserved(page);
>  	__SetPageHead(page);
> -	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
> +	for (i = 0; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
>  		/*
>  		 * For gigantic hugepages allocated through bootmem at
>  		 * boot, it's safer to be consistent with the not-gigantic
> @@ -1814,7 +1813,8 @@ static bool __prep_compound_gigantic_page(struct page *page, unsigned int order,
>  		} else {
>  			VM_BUG_ON_PAGE(page_count(p), p);
>  		}
> -		set_compound_head(p, page);
> +		if (i != 0)
> +			set_compound_head(p, page);

It seems we forgot to unfreeze the head page in the out_error path. If an
unexpected inflated ref count occurs, won't the ref count of the head page
go negative in free_gigantic_page()?
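
Something like the below in the out_error path might be what's needed
(untested, just to illustrate the concern):

	out_error:
		/* undo page modifications made above */
		p = page;
		for (j = 0; j < i; j++, p = mem_map_next(p, page, j)) {
			if (j != 0)
				clear_compound_head(p);
			/* unfreeze, the head page included */
			set_page_refcounted(p);
		}
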

Thanks for your patch, Mike. I hope this will help solve the races with
memory failure. ;) And I will take a closer look when I have enough time.
