Date:   Thu, 17 Jun 2021 21:47:46 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     David Hildenbrand <david@...hat.com>,
        Naoya Horiguchi <nao.horiguchi@...il.com>, linux-mm@...ck.org
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Oscar Salvador <osalvador@...e.de>,
        Michal Hocko <mhocko@...e.com>,
        Naoya Horiguchi <naoya.horiguchi@....com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH mmotm v1] mm/hwpoison: disable pcp for
 page_handle_poison()

On 6/17/21 5:21 AM, David Hildenbrand wrote:
> On 17.06.21 11:26, Naoya Horiguchi wrote:
>> From: Naoya Horiguchi <naoya.horiguchi@....com>
>>
>> The recent patch "mm/page_alloc: allow high-order pages to be stored on
>> the per-cpu lists" makes the kernel decide whether to use the pcp lists
>> via pcp_allowed_order(), which breaks soft-offline for hugetlb pages.
>>
>> Soft-offline dissolves a migration source page and then removes it from
>> the buddy free list, so it assumes that every subpage of the
>> soft-offlined hugepage is recognized as a buddy page right after
>> returning from dissolve_free_huge_page().  pcp_allowed_order() returns
>> true for hugetlb, so this assumption no longer holds.
>>
>> So disable pcp during dissolve_free_huge_page() and
>> take_page_off_buddy() to prevent soft-offlined hugepages from being
>> linked onto pcp lists.  Soft-offline should not be a common event, so
>> the performance impact should be minimal.  And since the optimization
>> in Mel's patch could still benefit hugetlb, zone_pcp_disable() is
>> called only in the hwpoison context.
>>
>> Signed-off-by: Naoya Horiguchi <naoya.horiguchi@....com>
>> ---
>>   mm/memory-failure.c | 19 ++++++++++++++++---
>>   1 file changed, 16 insertions(+), 3 deletions(-)
>>
>> diff --git v5.13-rc6-mmotm-2021-06-15-20-24/mm/memory-failure.c v5.13-rc6-mmotm-2021-06-15-20-24_patched/mm/memory-failure.c
>> index 1842822a10da..593079766655 100644
>> --- v5.13-rc6-mmotm-2021-06-15-20-24/mm/memory-failure.c
>> +++ v5.13-rc6-mmotm-2021-06-15-20-24_patched/mm/memory-failure.c
>> @@ -66,6 +66,19 @@ int sysctl_memory_failure_recovery __read_mostly = 1;
>>  
>>   atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0);
>>  
>> +static bool __page_handle_poison(struct page *page)
>> +{
>> +    bool ret;
>> +
>> +    zone_pcp_disable(page_zone(page));
>> +    ret = dissolve_free_huge_page(page);
>> +    if (!ret)
>> +        ret = take_page_off_buddy(page);
>> +    zone_pcp_enable(page_zone(page));
>> +
>> +    return ret;
>> +}
>> +
>>   static bool page_handle_poison(struct page *page, bool hugepage_or_freepage, bool release)
>>   {
>>       if (hugepage_or_freepage) {
>> @@ -73,7 +86,7 @@ static bool page_handle_poison(struct page *page, bool hugepage_or_freepage, boo
>>            * Doing this check for free pages is also fine since dissolve_free_huge_page
>>            * returns 0 for non-hugetlb pages as well.
>>            */
>> -        if (dissolve_free_huge_page(page) || !take_page_off_buddy(page))
>> +        if (!__page_handle_poison(page))
>>               /*
>>                * We could fail to take off the target page from buddy
>>                * for example due to racy page allocation, but that's
>> @@ -986,7 +999,7 @@ static int me_huge_page(struct page *p, unsigned long pfn)
>>            */
>>           if (PageAnon(hpage))
>>               put_page(hpage);
>> -        if (!dissolve_free_huge_page(p) && take_page_off_buddy(p)) {
>> +        if (__page_handle_poison(p)) {
>>               page_ref_inc(p);
>>               res = MF_RECOVERED;
>>           }
>> @@ -1441,7 +1454,7 @@ static int memory_failure_hugetlb(unsigned long pfn, int flags)
>>           res = get_hwpoison_page(p, flags);
>>           if (!res) {
>>               res = MF_FAILED;
>> -            if (!dissolve_free_huge_page(p) && take_page_off_buddy(p)) {
>> +            if (__page_handle_poison(p)) {
>>                   page_ref_inc(p);
>>                   res = MF_RECOVERED;
>>               }
>>
> 
> Just to make sure: all call paths are fine with us taking a mutex here, right?
> 

That should be the case.  dissolve_free_huge_page() can sleep, so if any
of these call paths could not sleep we would already be broken.
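
For reference, zone_pcp_disable()/zone_pcp_enable() in mm/page_alloc.c look
roughly like this (a simplified sketch against v5.13; details may differ in
mmotm), which is where the mutex comes from:

/* Simplified sketch, not the exact mmotm code. */
void zone_pcp_disable(struct zone *zone)
{
	/* Can sleep: serializes pcp high/batch updates across callers. */
	mutex_lock(&pcp_batch_high_lock);
	/* high=0, batch=1 effectively bypasses the per-cpu lists. */
	__zone_set_pageset_high_and_batch(zone, 0, 1);
	__drain_all_pages(zone, true);
}

void zone_pcp_enable(struct zone *zone)
{
	__zone_set_pageset_high_and_batch(zone, zone->pageset_high,
					  zone->pageset_batch);
	mutex_unlock(&pcp_batch_high_lock);
}

So the disable/enable pair itself already assumes a sleepable context.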
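
For context on the breakage itself, the order check that the patch
description refers to is roughly the following (again a sketch of Mel's
pcp_allowed_order(), not the exact mmotm code):

/* Sketch of the order check added by Mel's series. */
static inline bool pcp_allowed_order(unsigned int order)
{
	if (order <= PAGE_ALLOC_COSTLY_ORDER)
		return true;
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	/* pageblock_order matches the hugetlb/THP order on common configs. */
	if (order == pageblock_order)
		return true;
#endif
	return false;
}

That is why a dissolved hugepage can now land on a pcp list instead of the
buddy free list, where take_page_off_buddy() cannot find it.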

-- 
Mike Kravetz
