Message-ID: <20220512045028.GB235456@hori.linux.bs1.fc.nec.co.jp>
Date:   Thu, 12 May 2022 04:50:28 +0000
From:   HORIGUCHI NAOYA(堀口 直也) 
        <naoya.horiguchi@....com>
To:     Mike Kravetz <mike.kravetz@...cle.com>
CC:     Miaohe Lin <linmiaohe@...wei.com>,
        Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
        Andrew Morton <akpm@...ux-foundation.org>,
        zhenwei pi <pizhenwei@...edance.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH v1] mm,hwpoison: set PG_hwpoison for busy hugetlb pages

On Wed, May 11, 2022 at 08:06:47PM -0700, Mike Kravetz wrote:
> On 5/11/22 19:54, Miaohe Lin wrote:
> > On 2022/5/12 2:35, Mike Kravetz wrote:
> >> On 5/11/22 08:19, Naoya Horiguchi wrote:
> >>> From: Naoya Horiguchi <naoya.horiguchi@....com>
> >>>
> >>> If memory_failure() fails to grab page refcount on a hugetlb page
> >>> because it's busy, it returns without setting PG_hwpoison on it.
> >>> This not only loses a chance of error containment, but also breaks the
> >>> rule that action_result() should be called only when memory_failure()
> >>> does some handling work (even if that's just setting PG_hwpoison).
> >>> This inconsistency could harm code maintainability.
> >>>
> >>> So set PG_hwpoison and call hugetlb_set_page_hwpoison() for such a case.
> > 
> > I'm sorry, but where is hugetlb_set_page_hwpoison() defined and used? I can't find it.
> > 
> >>>
> >>> Fixes: 405ce051236c ("mm/hwpoison: fix race between hugetlb free/demotion and memory_failure_hugetlb()")
> >>> Signed-off-by: Naoya Horiguchi <naoya.horiguchi@....com>
> >>> ---
> >>>  include/linux/mm.h  | 1 +
> >>>  mm/memory-failure.c | 8 ++++----
> >>>  2 files changed, 5 insertions(+), 4 deletions(-)
> >>>
> >>> diff --git a/include/linux/mm.h b/include/linux/mm.h
> >>> index d446e834a3e5..04de0c3e4f9f 100644
> >>> --- a/include/linux/mm.h
> >>> +++ b/include/linux/mm.h
> >>> @@ -3187,6 +3187,7 @@ enum mf_flags {
> >>>  	MF_MUST_KILL = 1 << 2,
> >>>  	MF_SOFT_OFFLINE = 1 << 3,
> >>>  	MF_UNPOISON = 1 << 4,
> >>> +	MF_NO_RETRY = 1 << 5,
> >>>  };
> >>>  extern int memory_failure(unsigned long pfn, int flags);
> >>>  extern void memory_failure_queue(unsigned long pfn, int flags);
> >>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> >>> index 6a28d020a4da..e3269b991016 100644
> >>> --- a/mm/memory-failure.c
> >>> +++ b/mm/memory-failure.c
> >>> @@ -1526,7 +1526,8 @@ int __get_huge_page_for_hwpoison(unsigned long pfn, int flags)
> >>>  			count_increased = true;
> >>>  	} else {
> >>>  		ret = -EBUSY;
> >>> -		goto out;
> >>> +		if (!(flags & MF_NO_RETRY))
> >>> +			goto out;
> >>>  	}
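
(Not quoted in this reply: the second mm/memory-failure.c hunk, in
memory_failure_hugetlb().  A minimal sketch of how a caller might use the new
MF_NO_RETRY flag, assuming a retry label around get_huge_page_for_hwpoison(),
could look like this; an illustration, not the literal hunk:

	retry:
		res = get_huge_page_for_hwpoison(pfn, flags);
		...
		} else if (res == -EBUSY) {
			/*
			 * The first attempt failed to take a refcount on the
			 * busy hugetlb page.  Retry once with MF_NO_RETRY set,
			 * telling the hugetlb path not to bail out, so that
			 * PG_hwpoison still gets set on the page.
			 */
			if (!(flags & MF_NO_RETRY)) {
				flags |= MF_NO_RETRY;
				goto retry;
			}
			action_result(pfn, MF_MSG_UNKNOWN, MF_IGNORED);
		}
)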
> >>
> >> Hi Naoya,
> >>
> >> We are in the else block because !HPageFreed() and !HPageMigratable().
> >> IIUC, this likely means the page is isolated.  One common reason for isolation
> >> is migration.  So, the page could be isolated and on a list for migration.
> >>
> >> I took a quick look at the hugetlb migration code and did not see any checks
> >> for PageHWPoison after a hugetlb page is isolated.  I could have missed
> >> something, though.  If there are no checks, we will read the PageHWPoison
> >> page in kernel mode while copying to the migration target.
> >>
> >> Is this an issue?  Is it something we need to be concerned with?  Memory
> >> errors can happen at any time, and gracefully handling them is best effort.
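
(As an illustration of the missing check described above: a hypothetical guard
in the hugetlb migration copy path, with names following
unmap_and_move_huge_page() in mm/migrate.c; no such test exists there today:

	/*
	 * Hypothetical: recheck for poison after isolation, just before
	 * the copy.  Without a check like this, copy_huge_page() reads
	 * the PageHWPoison source page in kernel mode.
	 */
	if (unlikely(PageHWPoison(hpage))) {
		rc = -EIO;
		goto out_unlock;
	}
)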
> > 
> > It seems the HWPoison hugetlb page will still be accessed even before this
> > patch. Can we do a get_page_unless_zero() first here to ensure that hugetlb
> > page migration fails due to this extra page reference and thus does not
> > access the page content? If the hugetlb page is already frozen, the
> > corrupted memory will still be consumed though. :(
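
(A sketch of this suggestion, as hypothetical code in the quoted else branch
of __get_huge_page_for_hwpoison(); whether taking the extra reference here is
safe is exactly the open question:

	} else {
		/*
		 * Busy page: try to take a reference anyway.  Migration's
		 * expected-refcount check would then fail, so the copy
		 * never reads the poisoned contents.  If the refcount is
		 * already frozen, this cannot help.
		 */
		if (get_page_unless_zero(head))
			count_increased = true;
		else
			ret = -EBUSY;
	}
)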
> 
> Right.  This potential issue was not introduced with this patch.
> Also, I am not sure but it might be an issue with non-hugetlb pages as well.
> 
> As mentioned, memory error handling is a best effort.  Since errors can
> happen at any time, we cannot handle all cases.  Or, you could spend the
> rest of your life trying. :)
> 
> The question is, should we worry about errors that happen when a page is
> isolated for migration?

I think yes, but not to save the current migration event; rather, to save
us from future memory errors caused by the broken page.

Thanks,
Naoya Horiguchi
