Message-ID: <YrrZ3hZlqEb3rlM0@FVFYT0MHHV2J>
Date:   Tue, 28 Jun 2022 18:37:18 +0800
From:   Muchun Song <songmuchun@...edance.com>
To:     HORIGUCHI NAOYA(堀口 直也) 
        <naoya.horiguchi@....com>
Cc:     Naoya Horiguchi <nao.horiguchi@...il.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        David Hildenbrand <david@...hat.com>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Miaohe Lin <linmiaohe@...wei.com>,
        Liu Shixin <liushixin2@...wei.com>,
        Yang Shi <shy828301@...il.com>,
        Oscar Salvador <osalvador@...e.de>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 4/9] mm, hwpoison, hugetlb: support saving mechanism
 of raw error pages

On Tue, Jun 28, 2022 at 08:17:55AM +0000, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Tue, Jun 28, 2022 at 02:26:47PM +0800, Muchun Song wrote:
> > On Tue, Jun 28, 2022 at 02:41:22AM +0000, HORIGUCHI NAOYA(堀口 直也) wrote:
> > > On Mon, Jun 27, 2022 at 05:26:01PM +0800, Muchun Song wrote:
> > > > On Fri, Jun 24, 2022 at 08:51:48AM +0900, Naoya Horiguchi wrote:
> > > > > From: Naoya Horiguchi <naoya.horiguchi@....com>
> ...
> > > > > +	} else {
> > > > > +		/*
> > > > > +		 * Failed to save raw error info.  We no longer trace all
> > > > > +		 * hwpoisoned subpages, and we need to refuse to free/dissolve
> > > > > +		 * this hwpoisoned hugepage.
> > > > > +		 */
> > > > > +		set_raw_hwp_unreliable(hpage);
> > > > > +		return ret;
> > > > > +	}
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +inline int hugetlb_clear_page_hwpoison(struct page *hpage)
> > > > > +{
> > > > > +	struct llist_head *head;
> > > > > +	struct llist_node *t, *tnode;
> > > > > +
> > > > > +	if (raw_hwp_unreliable(hpage))
> > > > > +		return -EBUSY;
> > > > 
> > > > IIUC, we use head page's PageHWPoison to synchronize hugetlb_clear_page_hwpoison()
> > > > and hugetlb_set_page_hwpoison(), right? If so, who can set hwp_unreliable here?
> > > 
> > > Sorry if I might be missing your point, but raw_hwp_unreliable is set when
> > > allocating a raw_hwp_page fails.  hugetlb_set_page_hwpoison() can be called
> > 
> > Sorry. I had missed this. Thanks for your clarification.
> > 
> > > multiple times on a hugepage and if one of the calls fails, the hwpoisoned
> > > hugepage becomes unreliable.
> > > 
> > > BTW, as you pointed out above, if we switch to passing GFP_ATOMIC to kmalloc(),
> > > the kmalloc() never fails, so we no longer have to implement this unreliable
> > 
> > No. kmalloc() with GFP_ATOMIC can fail unless I'm missing something important.
> 
> OK, I've interpreted the comment about GFP_ATOMIC wrongly.
> 
>  * %GFP_ATOMIC users can not sleep and need the allocation to succeed. A lower
>  * watermark is applied to allow access to "atomic reserves".
>  
> 
> > > flag, so things get simpler.
> > > 
> > > > 
> > > > > +	ClearPageHWPoison(hpage);
> > > > > +	head = raw_hwp_list_head(hpage);
> > > > > +	llist_for_each_safe(tnode, t, head->first) {
> > > > 
> > > > Is it possible that a new item is added by hugetlb_set_page_hwpoison() and we do not
> > > > traverse it (we have cleared the page's PageHWPoison)? Then we would ignore a real hwpoison
> > > > page, right?
> > > 
> > > Maybe you are mentioning a race like the one below. Yes, that's possible.
> > >
> > 
> > Sorry, ignore my previous comments, I was thinking about this wrongly.
> > 
> > >   CPU 0                            CPU 1
> > > 
> > >                                    free_huge_page
> > >                                      lock hugetlb_lock
> > >                                      ClearHPageMigratable
> > 				       remove_hugetlb_page()
> > 				       // the page is non-HugeTLB now
> 
> Oh, I missed that.
> 
> > >                                      unlock hugetlb_lock
> > >   get_huge_page_for_hwpoison
> > >     lock hugetlb_lock
> > >     __get_huge_page_for_hwpoison
> > 
> > 	// cannot reach here since it is not a HugeTLB page now.
> > 	// So this race is impossible. Then we fall back to normal
> > 	// page handling. It seems there is a new issue here.
> > 	//
> > 	// memory_failure()
> > 	//	try_memory_failure_hugetlb()
> > 	//	if (hugetlb)
> > 	//		goto unlock_mutex;
> > 	//	if (TestSetPageHWPoison(p)) {
> > 	//	// This non-HugeTLB page's vmemmap is still optimized.
> > 	
> > Setting COMPOUND_PAGE_DTOR after hugetlb_vmemmap_restore() might fix this
> > issue, but we will encounter this race as you mentioned below.
> 
> I don't have clear ideas about this now (I haven't tested the vmemmap-optimized case
> yet), so I will think more about this case. Maybe memory_failure() needs to
> detect it because memory_failure() heavily depends on the status of struct
> page.
>

Because HVO (HugeTLB Vmemmap Optimization) maps all tail vmemmap pages
read-only, we cannot write any data to some tail struct pages. It is
a new issue unrelated to this patch.
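
To illustrate the point (a rough sketch only, not code from this series;
the helper mark_subpage_hwpoison() is hypothetical and the exact
hugetlb_vmemmap_restore() signature may differ between kernel versions),
any write such as SetPageHWPoison() to a tail struct page has to wait
until the vmemmap has been restored:

/*
 * Sketch with hypothetical helpers: while HVO is active, the tail struct
 * pages are backed by read-only vmemmap PTEs, so the vmemmap must be
 * restored before writing any flag to a tail struct page.
 */
static int mark_subpage_hwpoison(struct hstate *h, struct page *head,
				 struct page *subpage)
{
	if (HPageVmemmapOptimized(head)) {
		/* hypothetical restore step; the real API name may differ */
		int ret = hugetlb_vmemmap_restore(h, head);

		if (ret)
			return ret;	/* tail struct pages still read-only */
	}
	SetPageHWPoison(subpage);	/* vmemmap is writable again */
	return 0;
}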

Thanks.
 
> Thanks,
> Naoya Horiguchi
> 
> > 
> > Thanks.
> > 	
> > >       hugetlb_set_page_hwpoison
> > >         allocate raw_hwp_page
> > >         TestSetPageHWPoison
> > >                                      update_and_free_page
> > >                                        __update_and_free_page
> > >                                          if (PageHWPoison)
> > >                                            hugetlb_clear_page_hwpoison
> > >                                              TestClearPageHWPoison
> > >                                              // remove all list items
> > >         llist_add
> > >     unlock hugetlb_lock
> > > 
> > > 
> > > The end result seems not critical (leaking the raced raw_hwp_page?), but
> > > we need a fix.
