Message-ID: <YvKNNSEWd1va4jo4@monkey>
Date: Tue, 9 Aug 2022 09:37:09 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Miaohe Lin <linmiaohe@...wei.com>
Cc: Muchun Song <songmuchun@...edance.com>,
Joao Martins <joao.m.martins@...cle.com>,
Matthew Wilcox <willy@...radead.org>,
Michal Hocko <mhocko@...e.com>, Peter Xu <peterx@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux-MM <linux-mm@...ck.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] hugetlb: freeze allocated pages before creating hugetlb pages

On 08/09/22 10:48, Miaohe Lin wrote:
> On 2022/8/9 5:28, Mike Kravetz wrote:
<snip>
> > There have been proposals to change at least the buddy allocator to
> > return frozen pages as described at [3]. If such a change is made, it
> > can be employed by the hugetlb code. However, as mentioned above,
> > hugetlb uses several low-level allocators, so each would need to be
> > modified to return frozen pages. For now, we can manually freeze the
> > returned pages. This is done in two places:
> > 1) alloc_buddy_huge_page: only the returned head page is ref counted.
> >    We freeze the head page, retrying once in the VERY rare case where
> >    there may be an inflated ref count (sketched below).
> > 2) prep_compound_gigantic_page: for gigantic pages, the current code
> >    freezes all pages except the head page. The new code will simply
> >    freeze the head page as well.
> >
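
To illustrate (1), the freeze-with-retry looks roughly like the sketch
below. This is illustrative only: the wrapper name hugetlb_alloc_frozen
and its call shape are made up, not the actual patch. page_ref_freeze(page, 1)
atomically drops a ref count of exactly one to zero and fails for any
other value:

#include <linux/gfp.h>
#include <linux/page_ref.h>

static struct page *hugetlb_alloc_frozen(gfp_t gfp_mask, unsigned int order,
					 int nid, nodemask_t *nmask)
{
	bool retried = false;
	struct page *page;

retry:
	page = __alloc_pages(gfp_mask, order, nid, nmask);
	if (!page)
		return NULL;

	if (!page_ref_freeze(page, 1)) {
		/*
		 * VERY rare: a speculative reference (e.g. from a pfn
		 * walker) transiently inflated the count.  Drop our
		 * reference and retry once; whoever holds the extra
		 * reference will free the page when they drop it.
		 */
		__free_pages(page, order);
		if (!retried) {
			retried = true;
			goto retry;
		}
		return NULL;
	}

	/* Ref count is now zero; the page is frozen. */
	return page;
}
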
<snip>
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 28516881a1b2..6b90d85d545b 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1769,13 +1769,12 @@ static bool __prep_compound_gigantic_page(struct page *page, unsigned int order,
> > {
> > int i, j;
> > int nr_pages = 1 << order;
> > - struct page *p = page + 1;
> > + struct page *p = page;
> >
> > /* we rely on prep_new_huge_page to set the destructor */
> > set_compound_order(page, order);
> > - __ClearPageReserved(page);
> > __SetPageHead(page);
> > - for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
> > + for (i = 0; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
> > /*
> > * For gigantic hugepages allocated through bootmem at
> > * boot, it's safer to be consistent with the not-gigantic
> > @@ -1814,7 +1813,8 @@ static bool __prep_compound_gigantic_page(struct page *page, unsigned int order,
> > } else {
> > VM_BUG_ON_PAGE(page_count(p), p);
> > }
> > - set_compound_head(p, page);
> > + if (i != 0)
> > + set_compound_head(p, page);
>
> It seems we forgot to unfreeze the head page in the out_error path. If an
> unexpected inflated ref count occurs, won't the ref count of the head page
> become negative in free_gigantic_page?
Yes, thank you! I forgot to modify the error path to also fix up the
head page.
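
Roughly, the fixup needs to restore a reference on every page the loop
already froze, head page included, so free_gigantic_page sees a normal
ref count. A sketch of the tail of __prep_compound_gigantic_page,
following the loop conventions in the quoted diff (the actual v2 may
differ in details):

out_error:
	/* undo page modifications made above, head page included */
	p = page;
	for (j = 0; j < i; j++, p = mem_map_next(p, page, j)) {
		if (j != 0)
			clear_compound_head(p);
		set_page_refcounted(p);	/* unfreeze: ref count back to one */
	}
	/* clear PG_reserved on any pages the first loop did not reach */
	for (; j < nr_pages; j++, p = mem_map_next(p, page, j))
		__ClearPageReserved(p);
	set_compound_order(page, 0);
	__ClearPageHead(page);
	return false;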
--
Mike Kravetz