Date:   Mon, 1 Feb 2021 16:05:05 -0800
From:   Mike Kravetz <>
To:     David Hildenbrand <>,
        Muchun Song <>,
        Oscar Salvador <>
Cc:     Jonathan Corbet <>,
        Thomas Gleixner <>,
        Peter Zijlstra <>,
        Andrew Morton <>,
        Randy Dunlap <>,
        Mina Almasry <>,
        David Rientjes <>,
        Matthew Wilcox <>,
        Michal Hocko <>,
        "Song Bao Hua (Barry Song)" <>,
        HORIGUCHI NAOYA(堀口 直也) <>,
        Xiongchun duan <>, LKML <>,
        Linux Memory Management List <>,
        linux-fsdevel <>
Subject: Re: [External] Re: [PATCH v13 05/12] mm: hugetlb: allocate the
 vmemmap pages associated with each HugeTLB page

On 2/1/21 8:10 AM, David Hildenbrand wrote:
>>> What's your opinion about this? Should we take this approach?
>> I think trying to solve all the issues that could happen as the result of
>> not being able to dissolve a hugetlb page has made this extremely complex.
>> I know this is something we need to address/solve.  We do not want to add
>> more unexpected behavior in corner cases.  However, I can not help but think
>> about similar issues today.  For example, if a huge page is in use in
>> ZONE_MOVABLE or CMA there is no guarantee that it can be migrated today.
> Yes, hugetlbfs is broken with alloc_contig_range() as e.g., used by CMA and needs fixing. Then, similar problems as with hugetlbfs pages on ZONE_MOVABLE apply.
> hugetlbfs pages on ZONE_MOVABLE are problematic for memory unplug only in corner cases, I think:
> 1. Not sufficient memory to allocate a destination page. Well, nothing we can really do about that - just like trying to migrate any other memory but running into -ENOMEM.
> 2. Trying to dissolve a free huge page but running into reservation limits. I think we should at least try allocating a new free huge page before failing. To be tackled in the future.
>> Correct?  We may need to allocate another huge page for the target of the
>> migration, and there is no guarantee we can do that.
> I agree that 1. is similar to "cannot migrate because OOM".
> So thinking about it again, we don't actually seem to lose that much when
> a) Rejecting migration of a huge page when not being able to allocate the vmemmap for our source page. Our system seems to be under quite some memory pressure already. Migration could just fail because we fail to allocate a migration target already.
> b) Rejecting to dissolve a huge page when not able to allocate the vmemmap. Dissolving can fail already. And, again, our system seems to be under quite some memory pressure already.
> c) Rejecting freeing huge pages when not able to allocate the vmemmap. I guess the "only" surprise is that the user might now no longer get what he asked for. This seems to be the "real change".
> So maybe little actually speaks against allowing for migration of such huge pages and optimizing any huge page, besides rejecting freeing of huge pages and surprising the user/admin.
> I guess while our system is under memory pressure CMA and ZONE_MOVABLE are already no longer able to always keep their guarantees - until there is no more memory pressure.

My thinking was similar.  Failing to dissolve a hugetlb page because we could
not allocate vmemmap pages is not much, if any, worse than what we do under
near-OOM conditions today.  As for surprising the user/admin, we should
certainly log a warning if we cannot dissolve a hugetlb page.

One point David R brought up is still a bit concerning.  When getting close
to OOM, there may be users/code that will try to dissolve free hugetlb pages
to give back as much memory as possible to buddy.  I've seen users holding
'big chunks' of memory for a specific purpose and dumping them when needed.
They were not doing this with hugetlb pages, but nothing would surprise me.

In this series, vmemmap freeing is 'opt in' at boot time.  I would expect
the use cases that want to opt in to rarely, if ever, free/dissolve hugetlb
pages.  But I could be wrong.
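
For reference, the boot-time opt-in is a kernel command-line parameter; in
the form of this work that was later merged it is spelled as below (shown
here only as an illustration of the opt-in, not as part of this mail):

```
hugetlb_free_vmemmap=on
```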
-- 
Mike Kravetz
