Message-ID: <CAMZfGtXQg__eDh39g0aXhvtD-91PkXJGeNFY_cJgC8X8q-11tA@mail.gmail.com>
Date:   Thu, 24 Jun 2021 11:38:08 +0800
From:   Muchun Song <songmuchun@...edance.com>
To:     Mike Kravetz <mike.kravetz@...cle.com>
Cc:     Naoya Horiguchi <nao.horiguchi@...il.com>,
        Linux Memory Management List <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Jann Horn <jannh@...gle.com>,
        Youquan Song <youquan.song@...el.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Jan Kara <jack@...e.cz>, John Hubbard <jhubbard@...dia.com>,
        "Kirill A . Shutemov" <kirill@...temov.name>,
        Matthew Wilcox <willy@...radead.org>,
        Michal Hocko <mhocko@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [External] [PATCH 2/2] hugetlb: address ref count racing in prep_compound_gigantic_page

On Thu, Jun 24, 2021 at 8:26 AM Mike Kravetz <mike.kravetz@...cle.com> wrote:
>
> Cc: Naoya
>
> On 6/23/21 1:00 AM, Muchun Song wrote:
> > On Tue, Jun 22, 2021 at 10:15 AM Mike Kravetz <mike.kravetz@...cle.com> wrote:
> >>
> >> In [1], Jann Horn points out a possible race between
> >> prep_compound_gigantic_page and __page_cache_add_speculative.  The
> >> root cause of the possible race is prep_compound_gigantic_page
> >> unconditionally setting the ref count of pages to zero.  It does this
> >> because prep_compound_gigantic_page is handed a 'group' of pages from an
> >> allocator and needs to convert that group of pages to a compound page.
> >> The ref count of each page in this 'group' is one as set by the
> >> allocator.  However, the ref count of compound page tail pages must be
> >> zero.
> >>
> >> The potential race comes about when ref counted pages are returned from
> >> the allocator.  When this happens, other mm code could also take a
> >> reference on the page.  __page_cache_add_speculative is one such
> >> example.  Therefore, prep_compound_gigantic_page can not just set the
> >> ref count of pages to zero as it does today.  Doing so would lose the
> >> reference taken by any other code.  This would lead to BUGs in code
> >> checking ref counts and could possibly even lead to memory corruption.
> >
> > Hi Mike,
> >
> > Well, it took me some time to understand the race. It also made me think
> > more about this. See the code snippet below from gather_surplus_pages().
> >
> >         zeroed = put_page_testzero(page);
> >         VM_BUG_ON_PAGE(!zeroed, page);
> >         enqueue_huge_page(h, page);
> >
> > The VM_BUG_ON_PAGE() can be triggered because of a similar
> > race, right? IIUC, we should also fix this.
>
> Thanks for taking a look at this Muchun.
>
> I believe you are correct.  Page allocators (even buddy) will hand back
> a ref counted head page.  Any other code 'could' take a reference on the
> head page before the pages are made into a hugetlb page.  Once the pages
> become a hugetlb page (PageHuge() true), then only hugetlb-specific
> code should be modifying the ref count.  So, it seems the 'race window'
> is from the time the pages are returned from a low level allocator until
> the time the pages become a hugetlb page.  Does that sound correct?
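
For context, the lookup side of that window looks roughly like this (my
simplified sketch of page_cache_get_speculative() /
__page_cache_add_speculative(), not the literal kernel code):

        /*
         * Only take a reference if the refcount is not already zero; a
         * page just handed back by the allocator has count == 1, so this
         * can succeed at any point before the page becomes a hugetlb page.
         */
        if (!page_ref_add_unless(page, 1, 0))
                return 0;       /* page is being freed, caller retries */
        /*
         * The caller must still re-check (e.g. via xas_reload()) that this
         * is the page it looked up, and put_page() it if not.
         */
        return 1;

So any page with a non-zero refcount can pick up an extra reference during
that window.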

I have a question about the rule itself: why should the ref count of a hugetlb
page be managed only by hugetlb-specific code? What if we relaxed that rule? If
we did, we could fix this issue easily.

CPU0:                                   CPU1:
page = xas_load()
// the page is freed to the buddy
                                        page = alloc_surplus_huge_page()
                                        if (put_page_testzero(page))
                                                enqueue_huge_page(h, page)
page_cache_get_speculative(page)
if (unlikely(page != xas_reload(&xas)))
        // this put_page() can return the hugetlb page to the pool for us
        put_page(page)

If someone successfully grabs a reference to the page before we drop ours in
gather_surplus_pages(), then their eventual put_page() will add the hugetlb
page to the pool for us.
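
To spell out why that final put_page() does the right thing: by the time
gather_surplus_pages() walks the surplus list, prep_new_huge_page() has
already set the compound destructor, so whoever drops the last reference goes
through free_huge_page(). Very roughly (heavily simplified from
free_huge_page() in mm/hugetlb.c, ignoring the surplus/temporary and
reservation handling):

        /*
         * Sketch of the path taken by the last put_page() once the page is
         * a hugetlb page; the real free_huge_page() does more accounting.
         */
        struct hstate *h = page_hstate(page);   /* valid because PageHuge(page) */

        spin_lock_irq(&hugetlb_lock);
        /* ... counters, reservation adjustments, etc. elided ... */
        enqueue_huge_page(h, page);             /* back onto the free list */
        spin_unlock_irq(&hugetlb_lock);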

>
> If we want to check for and handle such a race, we would need to do so
> in prep_new_huge_page.  After setting the destructor we would need to
> check for an increased ref count (> 1).  Not sure if we would need a
> memory barrier or some other type of synchronization for this?  This of
> course means that prep_new_huge_page could return an error, and we would
> need to deal with that in all callers.

As described above, IIUC, we do not need to change the behavior of
prep_new_huge_page(). We just need to change gather_surplus_pages() as shown
below. Just my thoughts; maybe I am wrong.

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c3b2a8a494d6..8a1a75534543 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2160,17 +2160,11 @@ static int gather_surplus_pages(struct hstate *h, long delta)

        /* Free the needed pages to the hugetlb pool */
        list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
-               int zeroed;
-
                if ((--needed) < 0)
                        break;
-               /*
-                * This page is now managed by the hugetlb allocator and has
-                * no users -- drop the buddy allocator's reference.
-                */
-               zeroed = put_page_testzero(page);
-               VM_BUG_ON_PAGE(!zeroed, page);
-               enqueue_huge_page(h, page);
+
+               if (put_page_testzero(page))
+                       enqueue_huge_page(h, page);
        }
 free:
        spin_unlock_irq(&hugetlb_lock);
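
If it helps to see the counting argument in isolation, here is a tiny
stand-alone userspace model of the interleaving above (obviously not kernel
code, just the dec-and-test logic): whoever drops the last reference does the
enqueue, the enqueue happens exactly once, and so no VM_BUG_ON()-style
assertion is needed.

/* refcount-model.c: build with  gcc -pthread refcount-model.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int refcount = 1;         /* reference held by the allocator path */
static atomic_int enqueued = 0;         /* times we "enqueued" the page */

/* Models put_page()/put_page_testzero(): the last ref does the enqueue. */
static void put_page_model(void)
{
        if (atomic_fetch_sub(&refcount, 1) == 1)
                atomic_fetch_add(&enqueued, 1);   /* enqueue_huge_page() */
}

/* Models a racing page_cache_get_speculative() user. */
static void *speculative_user(void *arg)
{
        int old = atomic_load(&refcount);

        (void)arg;
        /* page_ref_add_unless(): only take a ref if the count is not zero */
        while (old != 0 &&
               !atomic_compare_exchange_weak(&refcount, &old, old + 1))
                ;

        if (old != 0)                   /* got the ref, then xas_reload() mismatch */
                put_page_model();
        return NULL;
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, speculative_user, NULL);
        put_page_model();               /* gather_surplus_pages() drops its ref */
        pthread_join(t, NULL);

        /* Always prints 1: exactly one of the two paths enqueues the page. */
        printf("enqueued %d time(s)\n", atomic_load(&enqueued));
        return 0;
}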


>
> I went back and looked at those lines in gather_surplus_pages
>
>                 zeroed = put_page_testzero(page);
>                 VM_BUG_ON_PAGE(!zeroed, page);
>                 enqueue_huge_page(h, page);
>
> They were first added as part of alloc_buddy_huge_page with commit
> 2668db9111bb - hugetlb: correct page count for surplus huge pages.
> It appears the reason for the VM_BUG_ON is that prior hugetlb code
> forgot to account for the ref count provided by the buddy allocator.
> The VM_BUG_ON may have been added mostly as a sanity check for hugetlb
> ref count management.
>
> I wonder if we have ever hit that VM_BUG_ON in the 13 years it has been
> in the code?  I know you recently spotted the potential race with memory
> error handling and Naoya fixed up the memory error code.
>
> I'm OK with modifying prep_new_huge_page, but it is going to be a bit
> messy (like this patch).  I wonder if there are other less intrusive
> ways to address this potential issue?
> --
> Mike Kravetz
