Message-ID: <39674952-44fc-8386-39b7-9e0862aaa991@oracle.com>
Date: Wed, 23 Jun 2021 17:26:27 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Muchun Song <songmuchun@...edance.com>,
Naoya Horiguchi <nao.horiguchi@...il.com>
Cc: Linux Memory Management List <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Jann Horn <jannh@...gle.com>,
Youquan Song <youquan.song@...el.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Jan Kara <jack@...e.cz>, John Hubbard <jhubbard@...dia.com>,
"Kirill A . Shutemov" <kirill@...temov.name>,
Matthew Wilcox <willy@...radead.org>,
Michal Hocko <mhocko@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [External] [PATCH 2/2] hugetlb: address ref count racing in
prep_compound_gigantic_page
Cc: Naoya

On 6/23/21 1:00 AM, Muchun Song wrote:
> On Tue, Jun 22, 2021 at 10:15 AM Mike Kravetz <mike.kravetz@...cle.com> wrote:
>>
>> In [1], Jann Horn points out a possible race between
>> prep_compound_gigantic_page and __page_cache_add_speculative. The
>> root cause of the possible race is prep_compound_gigantic_page
>> unconditionally setting the ref count of pages to zero. It does this
>> because prep_compound_gigantic_page is handed a 'group' of pages from an
>> allocator and needs to convert that group of pages to a compound page.
>> The ref count of each page in this 'group' is one as set by the
>> allocator. However, the ref count of compound page tail pages must be
>> zero.
>>
>> The potential race comes about when ref counted pages are returned from
>> the allocator. When this happens, other mm code could also take a
>> reference on the page. __page_cache_add_speculative is one such
>> example. Therefore, prep_compound_gigantic_page can not just set the
>> ref count of pages to zero as it does today. Doing so would lose the
>> reference taken by any other code. This would lead to BUGs in code
>> checking ref counts and could possibly even lead to memory corruption.
>
> Hi Mike,
>
> Well, it took me some time to understand the race. It also made me think
> more about this. See the code snippet below from gather_surplus_pages().
>
> zeroed = put_page_testzero(page);
> VM_BUG_ON_PAGE(!zeroed, page);
> enqueue_huge_page(h, page);
>
> The VM_BUG_ON_PAGE() can be triggered because of a similar
> race, right? IIUC, we should also fix this.

Thanks for taking a look at this, Muchun.
I believe you are correct. Page allocators (even buddy) will hand back
a ref counted head page. Any other code 'could' take a reference on the
head page before the pages are made into a hugetlb page. Once the pages
become a hugetlb page (PageHuge() true), then only hugetlb-specific
code should be modifying the ref count. So, it seems the 'race window'
is from the time the pages are returned from a low level allocator until
the time the pages become a hugetlb page. Does that sound correct?
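
To make that window concrete, here is a rough sketch of what the 'other
code' side can look like.  This is illustration only (the function name
is made up, not anything in the tree); the point is that anything still
holding a pointer to one of these pages can briefly take and drop a
reference before PageHuge() becomes true:

static void transient_ref_during_window(struct page *page)
{
        /*
         * get_page_unless_zero() succeeds whenever the ref count is
         * non-zero, which includes the time between the low level
         * allocator returning the page (ref count 1) and the page
         * becoming a hugetlb page.
         */
        if (!get_page_unless_zero(page))
                return;

        /* ... revalidate, decide the page is not the one wanted ... */

        /*
         * If hugetlb set the ref count to zero in the meantime, as
         * prep_compound_gigantic_page does today, this put underflows
         * the count.  If hugetlb instead assumed it held the only
         * reference, its own put_page_testzero() will not see zero.
         */
        put_page(page);
}
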
If we want to check for and handle such a race, we would need to do so
in prep_new_huge_page. After setting the destructor, we would need to
check for an increased ref count (> 1). Not sure if we would need a
memory barrier or some other type of synchronization for this? This of
course means that prep_new_huge_page could return an error, and we would
need to deal with that in all callers.
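
Roughly something like the sketch below is what I am thinking of.  This
is untested and only illustrative; the helper names are from my reading
of the current code and the error handling is a placeholder:

static int prep_new_huge_page(struct hstate *h, struct page *page, int nid)
{
        __prep_new_huge_page(page);     /* sets HUGETLB_PAGE_DTOR, etc. */

        /*
         * The low level allocator handed us this page with a ref count
         * of one.  If the count is larger now, some other code took a
         * reference before the page became PageHuge(); back out and
         * let the caller decide how to retry.  Whether a barrier is
         * needed against the destructor store is the open question
         * above.
         */
        if (page_ref_count(page) > 1)
                return -EBUSY;

        spin_lock_irq(&hugetlb_lock);
        __prep_account_new_huge_page(h, nid);
        spin_unlock_irq(&hugetlb_lock);

        return 0;
}
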
I went back and looked at those lines in gather_surplus_pages:
zeroed = put_page_testzero(page);
VM_BUG_ON_PAGE(!zeroed, page);
enqueue_huge_page(h, page);

They were first added as part of alloc_buddy_huge_page by commit
2668db9111bb ("hugetlb: correct page count for surplus huge pages").
It appears the reason for the VM_BUG_ON is that prior hugetlb code
forgot to account for the ref count provided by the buddy allocator.
The VM_BUG_ON may have been added mostly as a sanity check for hugetlb
ref count management.
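
For reference, the interleaving Muchun points at would look roughly like
this (annotated copy of the existing lines, not a proposed change):

        /*
         * Surplus page just returned by the allocator with a ref count
         * of one.  If another CPU still holds a transient reference,
         * as in the earlier sketch, the count here is two instead.
         */
        zeroed = put_page_testzero(page);       /* 2 -> 1, zeroed == 0 */
        VM_BUG_ON_PAGE(!zeroed, page);          /* would fire */
        enqueue_huge_page(h, page);
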
I wonder if we have ever hit that VM_BUG_ON in the 13 years it has been
in the code? I know you recently spotted the potential race with memory
error handling and Naoya fixed up the memory error code.

I'm OK with modifying prep_new_huge_page, but it is going to be a bit
messy (like this patch). I wonder if there are other less intrusive
ways to address this potential issue?
--
Mike Kravetz