Date:   Tue, 12 Jan 2021 15:28:25 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Muchun Song <songmuchun@...edance.com>
Cc:     Mike Kravetz <mike.kravetz@...cle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
        Andi Kleen <ak@...ux.intel.com>, mhocko@...e.cz,
        Linux Memory Management List <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Yang Shi <shy828301@...il.com>
Subject: Re: [External] Re: [PATCH v3 1/6] mm: migrate: do not migrate HugeTLB
 page whose refcount is one

On 12.01.21 15:17, Muchun Song wrote:
> On Tue, Jan 12, 2021 at 9:51 PM David Hildenbrand <david@...hat.com> wrote:
>>
>> On 12.01.21 14:40, Muchun Song wrote:
>>> On Tue, Jan 12, 2021 at 7:11 PM David Hildenbrand <david@...hat.com> wrote:
>>>>
>>>> On 12.01.21 12:00, David Hildenbrand wrote:
>>>>> On 10.01.21 13:40, Muchun Song wrote:
>>>>>> If the refcount is one when it is migrated, it means that the page
>>>>>> was freed from under us. So we are done and do not need to migrate.
>>>>>>
>>>>>> This optimization is consistent with the handling of regular
>>>>>> pages, just like unmap_and_move() does (see the sketch after the
>>>>>> quoted diff below).
>>>>>>
>>>>>> Signed-off-by: Muchun Song <songmuchun@...edance.com>
>>>>>> Reviewed-by: Mike Kravetz <mike.kravetz@...cle.com>
>>>>>> Acked-by: Yang Shi <shy828301@...il.com>
>>>>>> ---
>>>>>>  mm/migrate.c | 6 ++++++
>>>>>>  1 file changed, 6 insertions(+)
>>>>>>
>>>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>>>> index 4385f2fb5d18..a6631c4eb6a6 100644
>>>>>> --- a/mm/migrate.c
>>>>>> +++ b/mm/migrate.c
>>>>>> @@ -1279,6 +1279,12 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
>>>>>>              return -ENOSYS;
>>>>>>      }
>>>>>>
>>>>>> +    if (page_count(hpage) == 1) {
>>>>>> +            /* page was freed from under us. So we are done. */
>>>>>> +            putback_active_hugepage(hpage);
>>>>>> +            return MIGRATEPAGE_SUCCESS;
>>>>>> +    }
>>>>>> +
>>>>>>      new_hpage = get_new_page(hpage, private);
>>>>>>      if (!new_hpage)
>>>>>>              return -ENOMEM;
>>>>>>
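
For reference, the regular-page check the commit message mirrors looks
roughly like this; a paraphrased, simplified sketch of unmap_and_move()
in mm/migrate.c, not a verbatim excerpt:

static int unmap_and_move(new_page_t get_new_page, ...)
{
	...
	if (page_count(page) == 1) {
		/* Page was freed from under us. So we are done. */
		ClearPageActive(page);
		ClearPageUnevictable(page);
		/* __PageMovable() handling omitted here for brevity. */
		goto out;
	}
	...
}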
>>>>>
>>>>> Question: What if this is called via alloc_contig_range(), where we
>>>>> even want to "migrate" free pages, meaning, relocate them?
>>>>>
>>>>
>>>> To be more precise:
>>>>
>>>> a) We don't have dissolve_free_huge_pages() calls on the
>>>> alloc_contig_range() path. So we *need* migration IIUC.
>>>
>>> Without this patch, if you want to migrate a HugeTLB page,
>>> the page is freed to the hugepage pool. With this patch,
>>> the page is also freed to the hugepage pool.
>>> I don't see any difference. Am I missing something?
>>
>> I am definitely not an expert on hugetlb pools; that's why I am asking.
>>
>> Isn't it the case that, with your code, no new page is allocated - so
>> dissolve_free_huge_pages() might just refuse to dissolve due to
>> reservations and bail out, no?
> 
> Without this patch, the new page can be allocated from the
> hugepage pool. dissolve_free_huge_pages() can also
> refuse to dissolve due to reservations. Right?

Oh, you mean the migration target might be coming from the pool? I guess
yes, looking at alloc_migration_target()->alloc_huge_page_nodemask().
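
For those following along, that path is roughly the following; a
paraphrased, simplified sketch of alloc_migration_target() in
mm/migrate.c, not a verbatim excerpt:

struct page *alloc_migration_target(struct page *page, unsigned long private)
{
	struct migration_target_control *mtc = (void *)private;
	...
	if (PageHuge(page)) {
		struct hstate *h = page_hstate(compound_head(page));

		/* May simply hand back a free page from the hugepage pool. */
		return alloc_huge_page_nodemask(h, nid, mtc->nmask, gfp_mask);
	}
	...
}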

In that case, yes, I think we run into a similar issue already.

Instead of trying to allocate new huge pages in
dissolve_free_huge_pages() to "relocate free pages", we bail out.
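
The bail-out I mean is the reservation check in dissolve_free_huge_page();
roughly, as a paraphrased, simplified sketch of mm/hugetlb.c, not a
verbatim excerpt:

static int dissolve_free_huge_page(struct page *page)
{
	int rc = -EBUSY;

	spin_lock(&hugetlb_lock);
	...
	if (!page_count(page)) {
		struct hstate *h = page_hstate(page);

		/* Refuse to dissolve if it would break a reservation. */
		if (h->free_huge_pages - h->resv_huge_pages == 0)
			goto out;
		...
		rc = 0;
	}
out:
	spin_unlock(&hugetlb_lock);
	return rc;
}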

This all feels kind of wrong. After we migrate a huge page, we should
free it back to the buddy, so most of our machinery just keeps working
without caring about free huge pages.


I can see how your patch will not change the current (IMHO broken) behavior.

-- 
Thanks,

David / dhildenb
