Message-ID: <20200811031139.GA7145@hori.linux.bs1.fc.nec.co.jp>
Date: Tue, 11 Aug 2020 03:11:40 +0000
From: HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>
To: Qian Cai <cai@....pw>
CC: "nao.horiguchi@...il.com" <nao.horiguchi@...il.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"mhocko@...nel.org" <mhocko@...nel.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"mike.kravetz@...cle.com" <mike.kravetz@...cle.com>,
"osalvador@...e.de" <osalvador@...e.de>,
"tony.luck@...el.com" <tony.luck@...el.com>,
"david@...hat.com" <david@...hat.com>,
"aneesh.kumar@...ux.vnet.ibm.com" <aneesh.kumar@...ux.vnet.ibm.com>,
"zeil@...dex-team.ru" <zeil@...dex-team.ru>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"catalin.marinas@....com" <catalin.marinas@....com>,
"will@...nel.org" <will@...nel.org>
Subject: Re: [PATCH v6 00/12] HWPOISON: soft offline rework
On Mon, Aug 10, 2020 at 11:22:55AM -0400, Qian Cai wrote:
> On Thu, Aug 06, 2020 at 06:49:11PM +0000, nao.horiguchi@...il.com wrote:
> > Hi,
> >
> > This patchset is the latest version of the soft offline rework patchset
> > targeted for v5.9.
> >
> > Since v5, I dropped some patches which tweak refcount handling in
> > madvise_inject_error() to avoid the "unknown refcount page" error.
> > I could not confirm the fix (the error did not reproduce with v5 in my
> > environment), but this change makes sure soft_offline_page() is called
> > only after the refcount is held, so the error should no longer happen.
>
> With this patchset, arm64 still suffers from premature allocation failures
> for 512M hugepages.
>
> # git clone https://gitlab.com/cailca/linux-mm
> # cd linux-mm; make
> # ./random 1
> - start: migrate_huge_offline
> - use NUMA nodes 0,1.
> - mmap and free 2147483648 bytes hugepages on node 0
> - mmap and free 2147483648 bytes hugepages on node 1
> madvise: Cannot allocate memory
>
> [ 292.456538][ T3685] soft offline: 0x8a000: hugepage isolation failed: 0, page count 2, type 7ffff80001000e (referenced|uptodate|dirty|head)
> [ 292.469113][ T3685] Soft offlining pfn 0x8c000 at process virtual address 0xffff60000000
> [ 292.983855][ T3685] Soft offlining pfn 0x88000 at process virtual address 0xffff40000000
> [ 293.271369][ T3685] Soft offlining pfn 0x8a000 at process virtual address 0xffff60000000
> [ 293.834030][ T3685] Soft offlining pfn 0xa000 at process virtual address 0xffff40000000
> [ 293.851378][ T3685] soft offline: 0xa000: hugepage migration failed -12, type 7ffff80001000e (referenced|uptodate|dirty|head)
>
> The freshly booted system still had 40G+ of memory free before running the test.
As I commented on v5, this failure is expected and does not indicate a kernel
issue. Once we successfully soft offline a hugepage, the memory range covering
that hugepage can never be used for a hugepage again, because one of its
subpages has been removed from the buddy allocator. So if you keep soft
offlining hugepages, eventually every memory range is "holed" and no hugepage
can be allocated on the system.
Please fix your test program to choose the number of loops (NR_LOOP) properly,
so that it can always allocate a hugepage during the test. For example, if 40G
of memory is usable and the hugepage size is 512MB, NR_LOOP should not be
larger than 80.
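
As a quick sanity check, here is a rough way to compute that bound on the test
machine (just a sketch, not part of random.c; it assumes the MemFree and
Hugepagesize values in /proc/meminfo reflect what the test can actually use):

  # (sketch) upper bound for NR_LOOP: free memory / hugepage size,
  # e.g. ~40G free with 512MB hugepages gives at most 80
  awk '/^MemFree:/ {free=$2} /^Hugepagesize:/ {hpage=$2}
       END {print int(free / hpage)}' /proc/meminfo
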
>
> Reverting the following commits allowed the test to run successfully over and over again.
>
> "mm, hwpoison: remove recalculating hpage"
> "mm,hwpoison-inject: don't pin for hwpoison_filter"
> "mm,hwpoison: Un-export get_hwpoison_page and make it static"
> "mm,hwpoison: kill put_hwpoison_page"
> "mm,hwpoison: unify THP handling for hard and soft offline"
> "mm,hwpoison: rework soft offline for free pages"
> "mm,hwpoison: rework soft offline for in-use pages"
> "mm,hwpoison: refactor soft_offline_huge_page and __soft_offline_page"
I'm still not sure why the test succeeds when these commits are reverted,
because the current mainline kernel provides a similar mechanism to prevent
reuse of soft offlined pages, so this success looks suspicious to me.
To investigate further, I'd like some additional information about the state
of the relevant pages after soft offlining. Could you collect it with the
following steps?
- modify random.c so that it does not run hotplug_memory() in
  migrate_huge_hotplug_memory(),
- compile it and run "./random 1" once,
- to collect the page state of the hwpoisoned pages, run
  "./page-types -Nlr -b hwpoison", where page-types is available under
  tools/vm in the kernel source tree,
- choose a few pfns of soft offlined pages from the kernel messages
  "Soft offlining pfn ...", and run "./page-types -Nlr -a <pfn>".
Thanks,
Naoya Horiguchi
>
> i.e., it is not enough to only revert,
>
> mm,hwpoison: double-check page count in __get_any_page()
> mm,hwpoison: introduce MF_MSG_UNSPLIT_THP
> mm,hwpoison: return 0 if the page is already poisoned in soft-offline
>