Date:   Wed, 27 Apr 2022 12:20:50 +0000
From:   HORIGUCHI NAOYA(堀口 直也) 
        <naoya.horiguchi@....com>
To:     David Hildenbrand <david@...hat.com>
CC:     Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Miaohe Lin <linmiaohe@...wei.com>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Yang Shi <shy828301@...il.com>,
        Oscar Salvador <osalvador@...e.de>,
        Muchun Song <songmuchun@...edance.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v1 0/4] mm, hwpoison: improve handling workload
 related to hugetlb and memory_hotplug

On Wed, Apr 27, 2022 at 12:48:16PM +0200, David Hildenbrand wrote:
> On 27.04.22 06:28, Naoya Horiguchi wrote:
> > Hi,
> > 
> > This patchset addresses some issues in workloads related to hwpoison,
> > hugetlb, and memory_hotplug.  The memory hotremove problem reported by
> > Miaohe Lin [1] is addressed in 2/4.  That patch depends on the "storing raw
> > error info" functionality provided by 1/4, and it also provides a delayed
> > dissolve function.
> > 
> > Patch 3/4 adjusts unpoison to the new semantics of HPageMigratable for
> > hwpoisoned hugepages, and 4/4 fixes the inconsistent counter issue.
> > 
> > [1] https://lore.kernel.org/linux-mm/20220421135129.19767-1-linmiaohe@huawei.com/
> > 
> > Please let me know if you have any suggestions and comments.
> > 
> 
> Hi,
> 
> I raised some time ago already that I don't quite see the value of
> allowing memory offlining with poisoned pages.
> 
> 1) It overcomplicates the offlining code and seems to be partially
>    broken

Yes, the current code has room for improvement.  The handling of the
refcount of hwpoisoned pages may be one such area.

> 2) It happens rarely (ever?), so do we even care?

I'm not certain how rare it is.  Cloud service providers who maintain
large fleets of servers may care?

> 3) Once the memory is offline, we can re-online it and lose HWPoison.
>    The memory can be happily used.
> 
> 3) can happen easily if our DIMM consists of multiple memory blocks and
> offlining of some memory block fails -> we'll re-online all already
> offlined ones. We'll happily reuse previously HWPoisoned pages, which
> feels more dangerous to me than just leaving the DIMM around (and
> eventually hwpoisoning all pages on it such that it won't get used
> anymore?).

I see. This scenario could indeed happen easily.

> 
> So maybe we should just fail offlining once we stumble over a hwpoisoned
> page?

That could be one choice.
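As a rough model of that choice (a purely illustrative userspace sketch, not
actual kernel code; `struct page_model`, `offline_range_model`, and
`EBUSY_MODEL` are invented names), the offline path would scan the range and
bail out as soon as it finds a poisoned page:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model: one hwpoison flag per page frame in the block. */
struct page_model {
    bool hwpoison;
};

#define EBUSY_MODEL 16  /* stand-in for the kernel's EBUSY */

/* Fail offlining once we stumble over a hwpoisoned page in the range. */
static int offline_range_model(const struct page_model *pages, size_t nr_pages)
{
    for (size_t i = 0; i < nr_pages; i++) {
        if (pages[i].hwpoison)
            return -EBUSY_MODEL;  /* refuse to offline this block */
    }
    return 0;  /* all pages clean: offlining may proceed */
}
```

The appeal of this variant is that no poison state ever needs to survive an
offline/online cycle, at the cost of pinning semi-broken DIMMs in place.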

Maybe another choice is this: offlining can succeed, but the HWPoison flags
are kept across offline/re-online operations.  If the system notices that
the re-onlined blocks are backed by the original DIMMs or NUMA nodes, the
saved HWPoison flags are still valid, so it keeps using them.  If the
re-onlined blocks are backed by replaced DIMMs/NUMA nodes, then all HWPoison
flags associated with the replaced physical address range can be cleared.
This could be done automatically during re-onlining if there is a way for
the kernel to know whether the DIMMs/NUMA nodes were replaced with new ones.
If there isn't, system applications would have to check the hardware and
explicitly reset the HWPoison flags.
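One way to picture that bookkeeping (again a hypothetical userspace sketch;
`struct hwpoison_record`, `save_hwpoison`, `reonline_block`, and the
`same_dimm` signal are all invented, and real code would live in the memory
hotplug path): record poisoned PFNs at offline time, then on re-online either
re-apply them (same DIMM) or drop them (replaced DIMM):

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_POISONED 64

/* Saved per-block poison info that survives offlining (invented structure). */
struct hwpoison_record {
    unsigned long pfns[MAX_POISONED];
    size_t count;
};

/* At offline time: remember which PFNs in the block were hwpoisoned. */
static void save_hwpoison(struct hwpoison_record *rec, const bool *poisoned,
                          unsigned long start_pfn, size_t nr_pages)
{
    rec->count = 0;
    for (size_t i = 0; i < nr_pages && rec->count < MAX_POISONED; i++)
        if (poisoned[i])
            rec->pfns[rec->count++] = start_pfn + i;
}

/*
 * At re-online time: re-apply the saved flags only if the backing DIMM is
 * unchanged; if the hardware was replaced, the record is stale and dropped.
 * Returns the number of re-applied poison flags.
 */
static size_t reonline_block(const struct hwpoison_record *rec, bool *poisoned,
                             unsigned long start_pfn, size_t nr_pages,
                             bool same_dimm)
{
    size_t reapplied = 0;

    for (size_t i = 0; i < nr_pages; i++)
        poisoned[i] = false;  /* freshly onlined pages start clean */
    if (!same_dimm)
        return 0;             /* replaced hardware: forget the old poison */
    for (size_t i = 0; i < rec->count; i++) {
        unsigned long pfn = rec->pfns[i];
        if (pfn >= start_pfn && pfn < start_pfn + nr_pages) {
            poisoned[pfn - start_pfn] = true;
            reapplied++;
        }
    }
    return reapplied;
}
```

The hard part in practice is the `same_dimm` decision itself, which is exactly
the "is there a way for the kernel to know" question above.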

> 
> Yes, we would disallow removing a semi-broken DIMM from the system that
> was onlined MOVABLE. I wonder if we really need that and how often it
> happens in real life. Most systems I am aware of don't allow for
> replacing individual DIMMs, but only complete NUMA nodes. Hm.

Yes, maybe most servers do not provide direct physical access to DIMMs.

Thanks,
Naoya Horiguchi
