Message-ID: <20200805204457.GB16406@hori.linux.bs1.fc.nec.co.jp>
Date:   Wed, 5 Aug 2020 20:44:58 +0000
From:   HORIGUCHI NAOYA(堀口 直也) 
        <naoya.horiguchi@....com>
To:     Qian Cai <cai@....pw>
CC:     "nao.horiguchi@...il.com" <nao.horiguchi@...il.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "mhocko@...nel.org" <mhocko@...nel.org>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "mike.kravetz@...cle.com" <mike.kravetz@...cle.com>,
        "osalvador@...e.de" <osalvador@...e.de>,
        "tony.luck@...el.com" <tony.luck@...el.com>,
        "david@...hat.com" <david@...hat.com>,
        "aneesh.kumar@...ux.vnet.ibm.com" <aneesh.kumar@...ux.vnet.ibm.com>,
        "zeil@...dex-team.ru" <zeil@...dex-team.ru>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v5 00/16] HWPOISON: soft offline rework

On Mon, Aug 03, 2020 at 09:49:42PM -0400, Qian Cai wrote:
> On Tue, Aug 04, 2020 at 01:16:45AM +0000, HORIGUCHI NAOYA(堀口 直也) wrote:
> > On Mon, Aug 03, 2020 at 03:07:09PM -0400, Qian Cai wrote:
> > > On Fri, Jul 31, 2020 at 12:20:56PM +0000, nao.horiguchi@...il.com wrote:
> > > > This patchset is the latest version of the soft offline rework patchset
> > > > targeted for v5.9.
> > > > 
> > > > The main focus of this series is to stabilize soft offline.  Historically,
> > > > soft offlined pages have suffered from race conditions because PageHWPoison
> > > > is used a little too aggressively, which (directly or indirectly) invades
> > > > other mm code that cares little about hwpoison.  This results in unexpected
> > > > behavior or kernel panics, which is very far from soft offline's "do not
> > > > disturb userspace or other kernel components" policy.
> > > > 
> > > > The main point of this change set is to contain the target page "via the
> > > > buddy allocator": we first free the target page as we do for normal pages,
> > > > and remove it from the buddy allocator only once we confirm that it has
> > > > reached the free list.  There is still a race window with page allocation,
> > > > but that's fine: losing the race means someone really wants that page and
> > > > the page still works, so soft offline can happily give up.
> > > > 
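For illustration only (none of this is code from the series, and all names are
made up): a userspace toy model of the flow described above, showing "free the
page, confirm it is still on the free list, otherwise give up":

/* toy_soft_offline.c -- userspace toy model, NOT kernel code; all names
 * are made up for illustration. */
#include <stdbool.h>
#include <stdio.h>

#define NPAGES 8

static bool page_on_free_list[NPAGES];  /* stand-in for the buddy free list */
static bool page_hwpoison[NPAGES];      /* stand-in for PageHWPoison */

/* Step 1: free the target page just like a normal page. */
static void free_target_page(int pfn)
{
    page_on_free_list[pfn] = true;
}

/* A racing allocation may grab the page during the window. */
static void racing_alloc(int pfn)
{
    if (page_on_free_list[pfn])
        page_on_free_list[pfn] = false;
}

/* Step 2: contain the page only if it is still on the free list;
 * otherwise someone really wants it, so give up quietly. */
static bool soft_offline_page(int pfn, bool racer_fires)
{
    free_target_page(pfn);
    if (racer_fires)
        racing_alloc(pfn);              /* the race window */
    if (!page_on_free_list[pfn])
        return false;                   /* lost the race: not an error */
    page_on_free_list[pfn] = false;     /* take it off the free list */
    page_hwpoison[pfn] = true;          /* now contained */
    return true;
}

int main(void)
{
    printf("pfn 2: %s\n", soft_offline_page(2, false) ? "contained" : "gave up");
    printf("pfn 5: %s\n", soft_offline_page(5, true) ? "contained" : "gave up");
    return 0;
}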
> > > > v4 from Oscar tries to handle the race around reallocation, but that part
> > > > still seems to be a work in progress, so I decided to separate it out from
> > > > the changes for v5.9.  Thank you for your contribution, Oscar.
> > > > 
> > > > The issue reported by Qian Cai is fixed by patch 16/16.
> > > > 
> > > > This patchset is based on v5.8-rc7-mmotm-2020-07-27-18-18, but I applied
> > > > this series after reverting the previous version.
> > > > https://github.com/Naoya-Horiguchi/linux/commits/soft-offline-rework.v5
> > > > shows more precisely what I did.
> > > > 
> > > > Any other comments, suggestions, or help would be appreciated.
> > > 
> > > There is another issue with this patchset (with and without the patch [1]).
> > > 
> > > [1] https://lore.kernel.org/lkml/20200803133657.GA13307@hori.linux.bs1.fc.nec.co.jp/
> > > 
> > > Arm64 using 512MB hugepages starts to fail allocations prematurely.
> > > 
> > > # ./random 1
> > > - start: migrate_huge_offline
> > > - use NUMA nodes 0,1.
> > > - mmap and free 2147483648 bytes hugepages on node 0
> > > - mmap and free 2147483648 bytes hugepages on node 1
> > > madvise: Cannot allocate memory
> > > 
> > > [  284.388061][ T3706] soft offline: 0x956000: hugepage isolation failed: 0, page count 2, type 17ffff80001000e (referenced|uptodate|dirty|head)
> > > [  284.400777][ T3706] Soft offlining pfn 0x8e000 at process virtual address 0xffff80000000
> > > [  284.893412][ T3706] Soft offlining pfn 0x8a000 at process virtual address 0xffff60000000
> > > [  284.901539][ T3706] soft offline: 0x8a000: hugepage isolation failed: 0, page count 2, type 7ffff80001000e (referenced|uptodate|dirty|head)
> > > [  284.914129][ T3706] Soft offlining pfn 0x8c000 at process virtual address 0xffff80000000
> > > [  285.433497][ T3706] Soft offlining pfn 0x88000 at process virtual address 0xffff60000000
> > > [  285.720377][ T3706] Soft offlining pfn 0x8a000 at process virtual address 0xffff80000000
> > > [  286.281620][ T3706] Soft offlining pfn 0xa000 at process virtual address 0xffff60000000
> > > [  286.290065][ T3706] soft offline: 0xa000: hugepage migration failed -12, type 7ffff80001000e (referenced|uptodate|dirty|head)
> > 
> > I think this is due to a lack of contiguous memory.
> > This test program iterates soft offlining many times over hugepages,
> > so eventually one page in every 512MB is removed from the buddy allocator,
> > at which point we can't allocate hugepages any more even if we have enough
> > free pages.  This is not good for heavy hugepage users, but it is the
> > intended behavior.
> >
> > It seems that random.c calls madvise(MADV_SOFT_OFFLINE) for 2 hugepages
> > and iterates this 1000 (== NR_LOOP) times, so if the system doesn't have
> > enough memory to cover the range of 2000 hugepages (1000GB on the Arm64
> > system), this ENOMEM should reproduce as expected.
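For reference, the pattern described above boils down to something like the
following minimal sketch.  This is NOT the actual random.c (linked below, which
also binds to NUMA nodes); the hugepage size, loop count, and single mapping per
iteration are assumptions here, and the madvise call needs CAP_SYS_ADMIN plus
CONFIG_MEMORY_FAILURE:

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_SOFT_OFFLINE
#define MADV_SOFT_OFFLINE 101           /* asm-generic/mman-common.h */
#endif

#define HPAGE_SIZE (512UL << 20)        /* 512MB hugepages on this Arm64 box */
#define NR_LOOP    1000

int main(void)
{
    for (int i = 0; i < NR_LOOP; i++) {
        char *p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            fprintf(stderr, "mmap: %s\n", strerror(errno));
            return 1;
        }
        memset(p, 0, HPAGE_SIZE);       /* fault the hugepage in */

        /* Ask the kernel to soft offline the backing hugepage. */
        if (madvise(p, HPAGE_SIZE, MADV_SOFT_OFFLINE))
            fprintf(stderr, "madvise: %s\n", strerror(errno));

        munmap(p, HPAGE_SIZE);          /* the mapping is torn down each time */
    }
    return 0;
}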
> 
> Well, each iteration will mmap/munmap, so there should be no leaking. 
> 
> https://gitlab.com/cailca/linux-mm/-/blob/master/random.c#L376
> 
> It also seems to me that madvise(MADV_SOFT_OFFLINE) does start to fragment
> memory somehow, because after this "madvise: Cannot allocate memory"
> happened, I immediately checked /proc/meminfo and found no hugepage usage
> at all.
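For reference, the hugepage counters referred to here are the HugePages_* lines
in /proc/meminfo; a minimal way to dump just those (illustrative helper, not
part of the test) would be:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256];
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) {
        perror("/proc/meminfo");
        return 1;
    }
    /* Print only the hugepage-related counters. */
    while (fgets(line, sizeof(line), f))
        if (!strncmp(line, "HugePages_", 10) ||
            !strncmp(line, "Hugepagesize", 12))
            fputs(line, stdout);
    fclose(f);
    return 0;
}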
> 
> > 
> > > 
> > > Reverting this patchset and its dependency patchset [2] (reverting the
> > > dependency alone did not help) fixed it,
> > 
> > But it's still not clear to me why this was not visible before this
> > patchset, so I need to look into it further.

I've reproduced the ENOMEM with v5.8 (without this patchset) simply by using
a VM with a small amount of memory (4GB), so this specific error does not seem
to be caused by this series.

Thanks,
Naoya Horiguchi
