Date:   Fri, 7 May 2021 04:17:52 +0000
From:   HORIGUCHI NAOYA(堀口 直也) 
        <naoya.horiguchi@....com>
To:     Oscar Salvador <osalvador@...e.de>
CC:     Naoya Horiguchi <nao.horiguchi@...il.com>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Michal Hocko <mhocko@...e.com>,
        Muchun Song <songmuchun@...edance.com>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] mm,hwpoison: fix race with compound page allocation

On Thu, May 06, 2021 at 10:51:33AM +0200, Oscar Salvador wrote:
> On Thu, May 06, 2021 at 10:31:22AM +0900, Naoya Horiguchi wrote:
> > From: Naoya Horiguchi <naoya.horiguchi@....com>
> > Date: Thu, 6 May 2021 09:54:39 +0900
> > Subject: [PATCH] mm,hwpoison: fix race with compound page allocation
> > 
> > When a hugetlb page fault (in an overcommitting situation) and
> > memory_failure() race with each other, VM_BUG_ON_PAGE() is triggered
> > by the following sequence:
> > 
> >     CPU0:                           CPU1:
> > 
> >                                     gather_surplus_pages()
> >                                       page = alloc_surplus_huge_page()
> >     memory_failure_hugetlb()
> >       get_hwpoison_page(page)
> >         __get_hwpoison_page(page)
> >           get_page_unless_zero(page)
> >                                       zero = put_page_testzero(page)
> >                                       VM_BUG_ON_PAGE(!zero, page)
> >                                       enqueue_huge_page(h, page)
> >       put_page(page)
> > 
> > __get_hwpoison_page() only checks the page refcount before taking an
> > additional one for memory error handling, which is wrong because there
> > are time windows where compound pages have a non-zero refcount during
> > initialization.
> > 
> > So make __get_hwpoison_page() check more of the page state for a few
> > types of compound pages. A PageSlab() check is added because otherwise
> > the "non anonymous thp" path is wrongly chosen.
> > 
> > Fixes: ead07f6a867b ("mm/memory-failure: introduce get_hwpoison_page() for consistent refcount handling")
> > Signed-off-by: Naoya Horiguchi <naoya.horiguchi@....com>
> > Reported-by: Muchun Song <songmuchun@...edance.com>
> > Cc: stable@...r.kernel.org # 5.12+
> 
> Hi Naoya, 
> 
> thanks for the patch.
> I have some concerns though, more below:
> 
> > ---
> >  mm/memory-failure.c | 53 +++++++++++++++++++++++++++------------------
> >  1 file changed, 32 insertions(+), 21 deletions(-)
> > 
> > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > index a3659619d293..966a1d6b0bc8 100644
> > --- a/mm/memory-failure.c
> > +++ b/mm/memory-failure.c
> > @@ -1095,30 +1095,41 @@ static int __get_hwpoison_page(struct page *page)
> >  {
> >  	struct page *head = compound_head(page);
> >  
> > -	if (!PageHuge(head) && PageTransHuge(head)) {
> > -		/*
> > -		 * Non anonymous thp exists only in allocation/free time. We
> > -		 * can't handle such a case correctly, so let's give it up.
> > -		 * This should be better than triggering BUG_ON when kernel
> > -		 * tries to touch the "partially handled" page.
> > -		 */
> > -		if (!PageAnon(head)) {
> > -			pr_err("Memory failure: %#lx: non anonymous thp\n",
> > -				page_to_pfn(page));
> > -			return 0;
> > +	if (PageCompound(page)) {
> > +		if (PageSlab(page)) {
> > +			return get_page_unless_zero(page);
> > +		} else if (PageHuge(head)) {
> > +			int ret = 0;
> > +
> > +			spin_lock(&hugetlb_lock);
> > +			if (HPageFreed(head) || HPageMigratable(head))
> > +				ret = get_page_unless_zero(head);
> > +			spin_unlock(&hugetlb_lock);
> > +			return ret;
> 
> Ok, I am probably overthinking this, but should we re-check under the
> lock whether the page is still a hugetlb page?
> My concern is, what would happen if:
> 
> CPU0                                          CPU1
>  __get_hwpoison_page                          
>   PageHuge(head) == T                         
>                                               dissolve hugetlb page
>    hugetlb_lock                               
> 
> 
> In that case, by the time we get to check the hugetlb flags, those checks
> might return false and we would not take a refcount.

Thanks, we had better add a re-check under the lock, as we do in
dissolve_free_huge_page().
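
For example, the hugetlb branch could re-check PageHuge() after taking
hugetlb_lock, roughly like the following (untested sketch; the "retry"
label is only illustrative and assumes a label at the top of
__get_hwpoison_page()):

	} else if (PageHuge(head)) {
		int ret = 0;

		spin_lock(&hugetlb_lock);
		/*
		 * Re-check under the lock: the page may have been
		 * dissolved and no longer be a hugetlb page by now.
		 */
		if (!PageHuge(head)) {
			spin_unlock(&hugetlb_lock);
			goto retry;
		}
		if (HPageFreed(head) || HPageMigratable(head))
			ret = get_page_unless_zero(head);
		spin_unlock(&hugetlb_lock);
		return ret;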

> So, I guess my question is: should we re-check under the lock, and if it
> is no longer a hugetlb page, do a "goto try_to_get_ref" that starts right
> at the beginning, or go directly to the get_page_unless_zero() at the end
> (the former is probably better)?

Yes, a retry could work in this case.  Looking at the existing code,
get_any_page() provides a "retry" layer, but it is currently not called by
get_hwpoison_page() when invoked from memory_failure().  So I'm thinking of
adjusting the code so that get_hwpoison_page() calls get_any_page() instead
of calling __get_hwpoison_page() directly.
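
As a rough, untested sketch (names and flow simplified, not the actual
get_any_page() interface), the retry layer could be shaped like:

	static int get_hwpoison_page(struct page *p)
	{
		int retries = 3;

		while (retries--) {
			if (__get_hwpoison_page(p))
				return 1;	/* got a refcount */
			/*
			 * The page may be in a transient state (e.g. a
			 * compound page still being set up or torn down),
			 * so give it a moment and try again, similarly to
			 * what get_any_page() does for soft offline.
			 */
			shake_page(p, 1);
		}
		return 0;
	}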

Thanks,
Naoya Horiguchi
