Message-ID: <Yrnn+QS8JCMT/3JV@monkey>
Date: Mon, 27 Jun 2022 10:25:13 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>
Cc: Miaohe Lin <linmiaohe@...wei.com>,
Muchun Song <songmuchun@...edance.com>,
Naoya Horiguchi <nao.horiguchi@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Liu Shixin <liushixin2@...wei.com>,
Yang Shi <shy828301@...il.com>,
Oscar Salvador <osalvador@...e.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH v2 1/9] mm/hugetlb: remove checking hstate_is_gigantic()
in return_unused_surplus_pages()
On 06/27/22 06:02, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Fri, Jun 24, 2022 at 12:11:07PM -0700, Mike Kravetz wrote:
> > On 06/24/22 08:34, HORIGUCHI NAOYA(堀口 直也) wrote:
> > > On Fri, Jun 24, 2022 at 04:15:19PM +0800, Miaohe Lin wrote:
> > > > On 2022/6/24 16:03, Muchun Song wrote:
> > > > > On Fri, Jun 24, 2022 at 10:25:48AM +0800, Miaohe Lin wrote:
> > > > >> On 2022/6/24 7:51, Naoya Horiguchi wrote:
> > > > >>> From: Naoya Horiguchi <naoya.horiguchi@....com>
> > > > >>
> > > > >> IIUC it might be better to do the below check:
> > > > >> /*
> > > > >> * Cannot return gigantic pages currently if runtime gigantic page
> > > > >> * allocation is not supported.
> > > > >> */
> > > > >> if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> > > > >> goto out;
> > > > >>
> > > > >
> > > > > The change looks good to me. However, the comments above are unnecessary
> > > > > since gigantic_page_runtime_supported() is straightforward.
> > > >
> > > > Agree. The comments can be removed.
> > >
> > > Thank you, both. Adding !gigantic_page_runtime_supported() without a
> > > comment makes sense to me.
> >
> > The change above makes sense to me. However, ...
> >
> > If we make the change above, will we have the same strange situation described
> > in the commit message when !gigantic_page_runtime_supported() is true?
> >
> > IIUC, !gigantic_page_runtime_supported implies that gigantic hugetlb
> > pages can not be allocated or freed at run time. They can only be
> > allocated at boot time. So, there should NEVER be surplus gigantic
> > pages if !gigantic_page_runtime_supported().
>
> I have the same understanding as the above.
>
> > To avoid this situation,
> > perhaps we should change set_max_huge_pages as follows (not tested)?
>
> The suggested diff looks clearer about what it does, so I'd like to take it
> in the next version. Then, what do we do about the "if (hstate_is_gigantic())"
> check in return_unused_surplus_pages() in the original suggestion? Should it
> be kept as is, removed, or combined with !gigantic_page_runtime_supported()?
> I guess the new checks prevent return_unused_surplus_pages() from being
> called during pool shrinking, so the check no longer seems necessary.
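
(For reference, and quoting from memory so please double check the tree,
gigantic_page_runtime_supported() is just a compile-time capability check,
along the lines of:

	static inline bool gigantic_page_runtime_supported(void)
	{
		return IS_ENABLED(CONFIG_ARCH_HAS_GIGANTIC_PAGE);
	}

so it simply reports whether the architecture can manipulate gigantic pages
at runtime at all.)
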
My first thought was to keep the check in return_unused_surplus_pages(), since
the function is called from other code paths. However, it SHOULD only try to
free surplus hugetlb pages. With the modifications to set_max_huge_pages(), we
will not have any surplus gigantic pages if !gigantic_page_runtime_supported(),
so the check can be removed.
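
To make that concrete, the early exit in question is (again from memory, so
the exact wording may differ):

	/* Cannot return gigantic pages currently */
	if (hstate_is_gigantic(h))
		goto out;

and just below it the function frees at most
min(unused_resv_pages, h->surplus_huge_pages) pages. If surplus gigantic
pages can never exist, that minimum is always zero for gigantic hstates, so
dropping the early exit should not change behavior.
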
Also note that we never try to dynamically allocate surplus gigantic pages.
This is also left over from the time when we could not easily allocate a
gigantic page at runtime.
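
(That restriction is enforced at the top of the surplus allocation path; if I
remember the code correctly, alloc_surplus_huge_page() simply bails out for
gigantic hstates, roughly:

	/* surplus allocation is never attempted for gigantic pages */
	if (hstate_is_gigantic(h))
		return NULL;

so surplus gigantic pages cannot appear on that path either.)
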
It would not surprise me if someone found a use case to ease this restriction
in the future, especially if 1G THP support is ever added. If that happens,
the check in return_unused_surplus_pages() would become necessary again, and
I would guess it would be overlooked and not re-added.
Sorry for thinking out loud! Although not strictly necessary, it 'might' be a
good idea to leave the check in place, since the need for it would likely be
overlooked if dynamic allocation of surplus gigantic pages is ever added.
I do not have a strong opinion.
P.S. This also reminds me that a similar check should be added to the
demote hugetlb code path. It would be bad if !gigantic_page_runtime_supported()
were true and we demoted a gigantic page into numerous non-gigantic pages.
I will send a patch.
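
Something along these lines, just to sketch the idea (not the actual patch;
the exact placement, e.g. in demote_store() or demote_pool_huge_page(), may
well end up different):

	/* demoting a gigantic page requires runtime gigantic page support */
	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
		return -EINVAL;
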
--
Mike Kravetz