Message-ID: <CAHbLzkq+NBjwjSvU1fQe56nLf5mmGp65TH8hDpb66EFLENctKA@mail.gmail.com>
Date: Tue, 14 May 2024 15:28:12 -0600
From: Yang Shi <shy828301@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Miaohe Lin <linmiaohe@...wei.com>, nao.horiguchi@...il.com, xuyu@...ux.alibaba.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH -rc7] mm/huge_memory: mark huge_zero_page reserved
On Tue, May 14, 2024 at 3:14 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> On Sat, 11 May 2024 11:54:35 +0800 Miaohe Lin <linmiaohe@...wei.com> wrote:
>
> > When I did memory failure tests recently, the panic below occurred:
> >
> > kernel BUG at include/linux/mm.h:1135!
> > invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
> > CPU: 9 PID: 137 Comm: kswapd1 Not tainted 6.9.0-rc4-00491-gd5ce28f156fe-dirty #14
> >
> > ...
> >
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -208,6 +208,7 @@ static bool get_huge_zero_page(void)
> > __free_pages(zero_page, compound_order(zero_page));
> > goto retry;
> > }
> > + __SetPageReserved(zero_page);
> > WRITE_ONCE(huge_zero_pfn, page_to_pfn(zero_page));
> >
> > /* We take additional reference here. It will be put back by shrinker */
> > @@ -260,6 +261,7 @@ static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
> > struct page *zero_page = xchg(&huge_zero_page, NULL);
> > BUG_ON(zero_page == NULL);
> > WRITE_ONCE(huge_zero_pfn, ~0UL);
> > + __ClearPageReserved(zero_page);
> > __free_pages(zero_page, compound_order(zero_page));
> > return HPAGE_PMD_NR;
> > }
>
> This causes a bit of a mess when staged ahead of mm-stable, so to
> avoid disruption I staged it behind mm-stable. This means that when
> the -stable maintainers try to merge it, they will ask for a fixed-up
> version for older kernels, so please just send them this version.
Can you please drop this from mm-unstable? Both David and I nacked a
similar patch in another thread:
https://lore.kernel.org/linux-mm/20240511032801.1295023-1-linmiaohe@huawei.com/
Both patches actually do the same thing; this one just uses page while
the other one uses folio.
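
For reference, a minimal sketch (illustrative only, not taken from
either patch) of the two variants being compared; both set PG_reserved
on the huge zero page, one through the page flag helpers used in this
patch and one through the folio flag helpers that the fixup below
switches to:

	/* Illustrative comparison only; not part of either patch. */
	#include <linux/mm.h>
	#include <linux/page-flags.h>

	/* Page-based variant, as in this patch. */
	static void huge_zero_mark_reserved_page(struct page *zero_page)
	{
		__SetPageReserved(zero_page);
	}

	/* Folio-based variant, as in the patch from the other thread. */
	static void huge_zero_mark_reserved_folio(struct folio *zero_folio)
	{
		__folio_set_reserved(zero_folio);
	}

The helper names above are hypothetical; the real call sites are the
get_huge_zero_page() and shrink_huge_zero_page_scan() hunks quoted in
this thread.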
>
> To facilitate this I added the below adjustment:
>
> (btw, shouldn't get_huge_zero_page() and shrink_huge_zero_page_scan()
> be renamed to *_folio_*?)
>
>
> From: Andrew Morton <akpm@...ux-foundation.org>
> Subject: mm-huge_memory-mark-huge_zero_page-reserved-fix
> Date: Tue May 14 01:53:37 PM PDT 2024
>
> Update it for 5691753d73a2 ("mm: convert huge_zero_page to huge_zero_folio")
>
> Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
> Cc: David Hildenbrand <david@...hat.com>
> Cc: Miaohe Lin <linmiaohe@...wei.com>
> Cc: Naoya Horiguchi <nao.horiguchi@...il.com>
> Cc: Xu Yu <xuyu@...ux.alibaba.com>
> Cc: Yang Shi <shy828301@...il.com>
> Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
> ---
>
> mm/huge_memory.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> --- a/mm/huge_memory.c~mm-huge_memory-mark-huge_zero_page-reserved-fix
> +++ a/mm/huge_memory.c
> @@ -212,7 +212,7 @@ retry:
> folio_put(zero_folio);
> goto retry;
> }
> - __SetPageReserved(zero_page);
> + __folio_set_reserved(zero_folio);
> WRITE_ONCE(huge_zero_pfn, folio_pfn(zero_folio));
>
> /* We take additional reference here. It will be put back by shrinker */
> @@ -265,7 +265,7 @@ static unsigned long shrink_huge_zero_pa
> struct folio *zero_folio = xchg(&huge_zero_folio, NULL);
> BUG_ON(zero_folio == NULL);
> WRITE_ONCE(huge_zero_pfn, ~0UL);
> - __ClearPageReserved(zero_page);
> + __folio_clear_reserved(zero_folio);
> folio_put(zero_folio);
> return HPAGE_PMD_NR;
> }
> _
>