Message-ID: <f9c85756-62e1-3d5c-9fbc-f38c6e8f07f3@oracle.com>
Date:   Thu, 13 May 2021 21:02:15 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Mina Almasry <almasrymina@...gle.com>
Cc:     Axel Rasmussen <axelrasmussen@...gle.com>,
        Peter Xu <peterx@...hat.com>, Linux-MM <linux-mm@...ck.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm, hugetlb: fix resv_huge_pages underflow on UFFDIO_COPY

On 5/13/21 5:23 PM, Mina Almasry wrote:
> On Thu, May 13, 2021 at 5:14 PM Mike Kravetz <mike.kravetz@...cle.com> wrote:
>>
>> On 5/13/21 4:49 PM, Mina Almasry wrote:
>>> On Thu, May 13, 2021 at 4:43 PM Mina Almasry <almasrymina@...gle.com> wrote:
>>>>
>>>> When hugetlb_mcopy_atomic_pte() is called with:
>>>> - mode==MCOPY_ATOMIC_NORMAL, and
>>>> - a page already in the page cache corresponding to the associated
>>>> address,
>>>>
>>>> we allocate a huge page from the reserves, then fail to insert it
>>>> into the cache, and return -EEXIST. In this case, we need to return
>>>> -EEXIST without allocating a new page, as the page already exists in the
>>>> cache. Allocating the extra page causes resv_huge_pages to underflow
>>>> temporarily until the extra page is freed.
>>>>
>>>> To fix this, we check whether a page exists in the cache and, if not,
>>>> allocate one and insert it into the cache immediately while holding the
>>>> lock. After that we copy the contents into the page.
>>>>
>>>> As a side effect of this, pages may exist in the cache for which the
>>>> copy failed, and for these pages PageUptodate(page) == false. Modify the
>>>> code that queries the cache to handle this correctly.
>>>>
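
For concreteness, the reordered flow being described is roughly the
following. This is a simplified reading of the description, not the
actual patch; error handling, locking details and the retry path are
elided:

    if (hugetlbfs_pagecache_present(h, dst_vma, dst_addr))
        return -EEXIST;     /* page exists: never touch the reserves */

    page = alloc_huge_page(dst_vma, dst_addr, 0);   /* consumes a reserve */
    if (IS_ERR(page))
        return -ENOMEM;

    /* Insert before copying, while the fault mutex is still held. */
    ret = huge_add_to_page_cache(page, mapping, idx);
    if (ret)
        goto out_release;

    /* Until the copy succeeds, the cached page stays !PageUptodate(). */
    ret = copy_huge_page_from_user(page, (const void __user *)src_addr,
                                   pages_per_huge_page(h), false);
    if (!ret)
        SetPageUptodate(page);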
>>>
>>> To be honest, I'm not sure I've done this bit correctly. Please take a
>>> look and let me know what you think. It may be overly complicated to
>>> have !PageUptodate() pages in the cache and to ask the rest of the
>>> code to handle that edge case correctly, but I'm not sure how else to
>>> fix this issue.
>>>
>>
>> I think you just moved the underflow from hugetlb_mcopy_atomic_pte to
>> hugetlb_no_page.  Why?
>>
>> Consider the case where there is only one reserve left and someone does
>> the MCOPY_ATOMIC_NORMAL for the address.  We will allocate the page and
>> consume the reserve (reserve count == 0) and insert the page into the
>> cache.  Now, if the copy_huge_page_from_user fails we must drop the
>> locks/fault mutex to do the copy.  While locks are dropped, someone
>> faults on the address and ends up in hugetlb_no_page.  The page is in
>> the cache but not up to date, so we go down the allocate new page path
>> and will decrement the reserve count again to cause underflow.
>>
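
Spelling that interleaving out (the reserve counts are just
illustrative):

    UFFDIO_COPY task                      faulting task
    ----------------                      -------------
    alloc_huge_page()
      resv_huge_pages: 1 -> 0
    huge_add_to_page_cache()
    copy_huge_page_from_user() fails
    drop locks/fault mutex to retry
                                          hugetlb_no_page()
                                          page in cache, !PageUptodate,
                                          so allocate-new-page path runs
                                            resv_huge_pages: 0 -> -1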
> 
> For what it's worth, I think I fixed the underflow with this patch,
> not moved it. I added a check in hugetlb_no_page() such that if we
> find a page in the cache with !PageUptodate(page), we will reuse that
> page instead of allocating a new one and decrementing the count again.
> Running the test locally with the WARN_ON_ONCE added shows no warnings
> anymore.
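
If I understand that description, the check is something like the
following sketch (hypothetical code, not the actual diff; the
'use_page' label just stands for going on to map the page as usual):

    page = find_lock_page(mapping, idx);
    if (page && !PageUptodate(page)) {
        /*
         * A userfaultfd copy already allocated this page and consumed
         * the reserve; reuse it rather than allocating (and
         * decrementing the reserve) a second time.
         */
        clear_huge_page(page, address, pages_per_huge_page(h));
        SetPageUptodate(page);
        goto use_page;
    }

That clear_huge_page() step is what worries me below.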

My bad, sorry I missed that.

I am also concerned with the semantics of this approach and what happens
when a fault races with the userfaultfd copy.  Previously I asked Peter
if we could/should use a page found in the cache for the copy.  His
answer was as follows:

 AFAICT that's the expected behavior, and it needs to be like that so as to
 avoid silent data corruption (if the page cache entry existed, it means the
 page is not "missing" at all, so it is not suitable for a UFFDIO_COPY, which
 is only used for the uffd page-missing case).

The interesting thing in your approach is that there was no page in the
cache before the start of the userfaultfd copy.  Yet, because we drop
the locks we could race with a fault that would normally add a page to
the cache.  Unless I am reading your patch incorrectly again, the
!PageUptodate(page) path in hugetlb_no_page is going to clear the page
in the cache.  This clearing would race with the copy to page done by
the userfaultfd code.
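
Spelled out, the race is:

    UFFDIO_COPY retry path                faulting task, hugetlb_no_page()
    ----------------------                --------------------------------
    page in cache, !PageUptodate
    locks dropped for the user copy
    copy_huge_page_from_user()            finds the same !PageUptodate page
      ...writing user data...             clear_huge_page() zeroes it
    SetPageUptodate()                     SetPageUptodate()

The final page contents depend on the interleaving: silent corruption.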

> 
>> How about this approach?
> 
> I'll give it a shot for sure. FWIW, on first glance it looks more
> complicated than what I have here, but my guess is that I'm not doing
> the !PageUptodate() handling correctly and that's why my solution
> seems simpler. I'll give it a shot though.

Ok, I will look into this more as well.  Unfortunately, there does not
appear to be a simple elegant solution.
-- 
Mike Kravetz

> 
>> - Keep the check for hugetlbfs_pagecache_present in hugetlb_mcopy_atomic_pte
>>   that you added.  That will catch the race where the page was added to
>>   the cache before entering the routine.
>> - With the above check in place, we only need to worry about the case
>>   where copy_huge_page_from_user fails and we must drop locks.  In this
>>   case we:
>>   - Free the page previously allocated.
>>   - Allocate a 'temporary' huge page without consuming reserves.  I'm
>>     thinking of something similar to page migration.
>>   - Drop the locks and let the copy_huge_page_from_user be done to the
>>     temporary page.
>>   - When reentering hugetlb_mcopy_atomic_pte after dropping locks (the
>>     *pagep case) we need to once again check
>>     hugetlbfs_pagecache_present.
>>   - We then try to allocate the huge page which will consume the
>>     reserve.  If successful, copy contents of temporary page to newly
>>     allocated page.  Free temporary page.
>>
>> There may be issues with this, and I have not given it deep thought.  It
>> does abuse the temporary huge page concept, but perhaps no more than
>> page migration.  Things do slow down if the extra page allocation and
>> copy is required, but that would only be the case if copy_huge_page_from_user
>> needs to be done without locks.  Not sure, but hoping that is rare.
>> --
>> Mike Kravetz
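
For reference, a loose sketch of the approach outlined above (helper
names are stand-ins; alloc_huge_page_noreserve() in particular is
hypothetical, meant as a migration-style allocation that does not touch
the reserves):

    /* First attempt: copy_huge_page_from_user failed, locks must drop. */
    put_page(page);                 /* give the reserved page back */
    *pagep = alloc_huge_page_noreserve(dst_vma, dst_addr);  /* hypothetical */
    return -ENOENT;     /* caller drops locks, copies into *pagep, retries */

    /* On retry (*pagep != NULL): */
    if (hugetlbfs_pagecache_present(h, dst_vma, dst_addr))
        return -EEXIST;             /* re-check after the unlocked window */
    page = alloc_huge_page(dst_vma, dst_addr, 0);   /* consumes the reserve */
    copy_user_huge_page(page, *pagep, dst_addr, dst_vma,
                        pages_per_huge_page(h));    /* temporary -> final */
    put_page(*pagep);
    *pagep = NULL;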
