lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <7ia4qzw3z9y1.fsf@castle.c.googlers.com>
Date: Thu, 18 Sep 2025 22:12:54 +0000
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Frank van der Linden <fvdl@...gle.com>
Cc: akpm@...ux-foundation.org,  muchun.song@...ux.dev,  linux-mm@...ck.org,
  linux-kernel@...r.kernel.org,  hannes@...xchg.org,  david@...hat.com
Subject: Re: [RFC PATCH 00/12] CMA balancing

Frank van der Linden <fvdl@...gle.com> writes:

> On Tue, Sep 16, 2025 at 5:51 PM Roman Gushchin <roman.gushchin@...ux.dev> wrote:
>>
>> Frank van der Linden <fvdl@...gle.com> writes:
>>
>> > This is an RFC on a solution to the long standing problem of OOMs
>> > occuring when the kernel runs out of space for unmovable allocations
>> > in the face of large amounts of CMA.
>> >
>> > Introduction
>> > ============
>> >
>> > When there is a large amount of CMA (e.g. with hugetlb_cma), it is
>> > possible for the kernel to run out of space to get unmovable
>> > allocations from. This is because it cannot use the CMA area.
>> > If the issue is just that there is a large CMA area, and that
>> > there isn't enough space left, that can be considered a
>> > misconfigured system. However, there is a scenario in which
>> > things could have been dealt with better: if the non-CMA area
>> > also has movable allocations in it, and there are CMA pageblocks
>> > still available.
>> >
>> > The current mitigation for this issue is to start using CMA
>> > pageblocks for movable allocations first if the amount of
>> > free CMA pageblocks is more than 50% of the total amount
>> > of free memory in a zone. But that may not always work out,
>> > e.g. the system could easily run in to a scenario where
>> > long-lasting movable allocations are made first, which do
>> > not go to CMA before the 50% mark is reached. When the
>> > non-CMA area fills up, these will get in the way of the
>> > kernel's unmovable allocations, and OOMs might occur.
>> >
>> > Even always directing movable allocations to CMA first does
>> > not completely fix the issue. Take a scenario where there
>> > is a large amount of CMA through hugetlb_cma. All of that
>> > CMA has been taken up by 1G hugetlb pages. So, movable allocations
>> > end up in the non-CMA area. Now, the number of hugetlb
>> > pages in the pool is lowered, so some CMA becomes available.
>> > At the same time, increased system activity leads to more unmovable
>> > allocations. Since the movable allocations are still in the non-CMA
>> > area, these kernel allocations might still fail.
>> >
>> >
>> > Additionally, CMA areas are allocated at the bottom of the zone.
>> > There has been some discussion on this in the past. Originally,
>> > doing allocations from CMA was deemed something that was best
>> > avoided. The arguments were twofold:
>> >
>> > 1) cma_alloc needs to be quick and should not have to migrate a
>> >    lot of pages.
>> > 2) migration might fail, so the fewer pages it has to migrate
>> >    the better
>> >
>> > These arguments are why CMA is avoided (until the 50% limit is hit),
>> > and why CMA areas are allocated at the bottom of a zone. But
>> > compaction migrates memory from the bottom to the top of a zone.
>> > That means that compaction will actually end up migrating movable
>> > allocations out of CMA and in to non-CMA, making the issue of
>> > OOMing for unmovable allocations worse.
>> >
>> > Solution: CMA balancing
>> > =======================
>> >
>> > First, this patch set makes the 50% threshold configurable, which
>> > is useful in any case. vm.cma_first_limit is the percentage of
>> > free CMA, as part of the total amount of free memory in a zone,
>> > above which CMA will be used first for movable allocations. 0
>> > is always, 100 is never.
>> >
>> > Then, it creates an interface that allows for moving movable
>> > allocations from non-CMA to CMA. CMA areas opt in to taking part
>> > in this through a flag. Also, if the flag is set for a CMA area,
>> > it is allocated at the top of a zone instead of the bottom.
>>
>> Hm, what if we can teach the compaction code to start off the
>> beginning of the zone or end of cma zone(s) depending on the
>> current balance?
>>
>> The problem with placing the cma area at the end is that it might
>> significantly decrease the success rate of cma allocations
>> when it's racing with the background compaction, which is hard
>> to control. At least it was clearly so in my measurements several
>> years ago.
>
> Indeed, I saw your change that moved the CMA areas to the bottom of
> the zone for that reason. In my testing, I saw a slight uptick in
> cma_alloc failures for HugeTLB (due to migration failures), but it
> wasn't much at all. Also, our current usage scenario can deal with the
> occasional failure, so it was less of a concern. I can try to re-run
> some tests to see if I can gather some harder numbers on that - the
> problem is of course finding a test case that gives reproducible
> results.

It might heavily depend on which file system you're using.

>>
>>
>> > Lastly, the hugetlb_cma code was modified to try to migrate
>> > movable allocations from non-CMA to CMA when a hugetlb CMA
>> > page is freed. Only hugetlb CMA areas opt in to CMA balancing,
>> > behavior for all other CMA areas is unchanged.
>> >
>> > Discussion
>> > ==========
>> >
>> > This approach works when tested with a hugetlb_cma setup
>> > where a large number of 1G pages is active, but the number
>> > is sometimes reduced in exchange for larger non-hugetlb
>> > overhead.
>> >
>> > Arguments against this approach:
>> >
>> > * It's kind of heavy-handed. Since there is no easy way to
>> >   track the amount of movable allocations residing in non-CMA
>> >   pageblocks, it will likely end up scanning too much memory,
>> >   as it only knows the upper bound.
>> > * It should be more integrated with watermark handling in the
>> >   allocation slow path. Again, this would likely require
>> >   tracking the number of movable allocations in non-CMA
>> >   pageblocks.
>>
>> I think the problem is very real and the proposed approach looks
>> reasonable. But I also agree that it's heavy-handed. Doesn't feel
>> like "the final" solution :)
>>
>> I wonder if we can track the amount of free space outside of cma
>> and move pages out on reaching a certain low threshold?
>> And it can in theory be the part of the generic kswapd/reclaim code.
>
> I considered this, yes. The first problem is that there is no easy way
> to express the number that is "pages allocated with __GFP_MOVABLE in
> non-CMA pageblocks".  You can approximate pretty well by checking if
> they are on the LRU, I suppose.

Hm, but why do you need it? Even if there are movable pages outside of
cma, but there are no or little free pages, we can start (trying) moving
movable pages into cma, right?

>
> If you succeed in getting that number accurately, the next issue is
> defining the right threshold and when to apply them. E.g. at one point
> I had a change to skip CMA pageblocks for compaction if the target
> pageblock is non-CMA, and the threshold has been hit. I ended up
> dropping it, since this more special-case approach was better for our
> use case. But my idea at the time was to add it as a 3rd mechanism to
> try harder for allocations (compaction, reclaim, CMA balancing).

Agree, but I guess we can pick some magic number.

>
> It was something like:
>
> 1) Track movable allocations in non-CMA areas.
> 2) If the watermark for an unmovable allocation is below high, stop
> migrating things (through compaction) from CMA to non-CMA, and always
> start allocating from CMA first.
> 3) If the watermark is approaching low, don't try compaction if you
> know that CMA can be balanced, but do CMA balancing instead, in
> amounts that satisfy your needs
>
> One problem here is ping-ponging of memory. If you put CMA areas at
> the bottom of the zone, ompaction moves things one way, CMA balancing
> the other way.

Agree, it's a valid concern.

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ