Message-ID: <20250915195153.462039-1-fvdl@google.com>
Date: Mon, 15 Sep 2025 19:51:41 +0000
From: Frank van der Linden <fvdl@...gle.com>
To: akpm@...ux-foundation.org, muchun.song@...ux.dev, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org
Cc: hannes@...xchg.org, david@...hat.com, roman.gushchin@...ux.dev, 
	Frank van der Linden <fvdl@...gle.com>
Subject: [RFC PATCH 00/12] CMA balancing

This is an RFC on a solution to the long-standing problem of OOMs
occurring when the kernel runs out of space for unmovable allocations
in the face of large amounts of CMA.

Introduction
============

When there is a large amount of CMA (e.g. with hugetlb_cma), the
kernel can run out of space for unmovable allocations, since those
cannot be served from the CMA area. If the only issue is that the
CMA area is so large that not enough space is left for the kernel,
that can be considered a misconfigured system. However, there is a
scenario in which things could have been handled better: when the
non-CMA area also holds movable allocations while CMA pageblocks
are still available.

The current mitigation for this issue is to start using CMA
pageblocks for movable allocations first once the amount of free
CMA exceeds 50% of the total free memory in a zone. But that may
not always work out: the system could easily run into a scenario
where long-lasting movable allocations are made first, before the
50% mark is reached, so they do not go to CMA. When the non-CMA
area fills up, these allocations get in the way of the kernel's
unmovable allocations, and OOMs might occur.
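
For reference, the current heuristic amounts to something like the
following (a simplified sketch of the check in __rmqueue() in
mm/page_alloc.c, not the exact code):

	if (IS_ENABLED(CONFIG_CMA) && (alloc_flags & ALLOC_CMA) &&
	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
		/* Over half of the zone's free memory is CMA: try CMA first. */
		page = __rmqueue_cma_fallback(zone, order);
		if (page)
			return page;
	}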

Even always directing movable allocations to CMA first does not
completely fix the issue. Take a scenario where there is a large
amount of CMA through hugetlb_cma, all of which has been taken up
by 1G hugetlb pages, so movable allocations end up in the non-CMA
area. Now the hugetlb pool is shrunk, and some CMA becomes
available again. At the same time, increased system activity leads
to more unmovable allocations. Since the movable allocations still
reside in the non-CMA area, those kernel allocations might still
fail.


Additionally, CMA areas are allocated at the bottom of the zone.
There has been some discussion on this in the past. Originally,
doing allocations from CMA was deemed something best avoided. The
arguments were twofold:

1) cma_alloc() needs to be quick, so it should not have to migrate
   a lot of pages.
2) Migration might fail, so the fewer pages it has to migrate, the
   better.

These arguments are why CMA is avoided (until the 50% limit is
hit), and why CMA areas are allocated at the bottom of a zone. But
compaction migrates memory from the bottom to the top of a zone.
That means compaction will actually end up migrating movable
allocations out of CMA and into non-CMA, making the issue of
OOMing for unmovable allocations worse.
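
This follows from how compact_zone() sets up its two scanners
(roughly; cached scan positions and other details omitted):

	/*
	 * The migration scanner starts at the bottom of the zone and moves
	 * up; the free scanner starts at the top and moves down. With CMA
	 * at the bottom, movable pages thus tend to be migrated out of CMA
	 * pageblocks and into non-CMA ones.
	 */
	cc->migrate_pfn = zone->zone_start_pfn;
	cc->free_pfn = pageblock_start_pfn(zone_end_pfn(zone) - 1);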

Solution: CMA balancing
=======================

First, this patch set makes the 50% threshold configurable, which
is useful in any case. vm.cma_first_limit is the percentage of a
zone's free memory that must consist of free CMA before CMA is
used first for movable allocations: 0 means always use CMA first,
100 means never.
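
A minimal sketch of the intended semantics (the helper and the
sysctl_cma_first_limit variable name below are illustrative, not
necessarily what the patch uses):

/* Should movable allocations try CMA pageblocks first in this zone? */
static bool cma_first(struct zone *zone, unsigned int alloc_flags)
{
	unsigned long free_cma, free_pages;

	if (!(alloc_flags & ALLOC_CMA))
		return false;

	free_cma = zone_page_state(zone, NR_FREE_CMA_PAGES);
	free_pages = zone_page_state(zone, NR_FREE_PAGES);

	/*
	 * vm.cma_first_limit == 0: any free CMA makes this true (always).
	 * vm.cma_first_limit == 100: free CMA can never exceed 100% of
	 * free memory, so this is never true.
	 */
	return free_cma * 100 > free_pages * sysctl_cma_first_limit;
}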

Then, it creates an interface that allows movable allocations to
be migrated from non-CMA to CMA pageblocks. CMA areas opt in to
taking part in this through a flag. If the flag is set for a CMA
area, it is also allocated at the top of a zone instead of the
bottom.
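
As a rough illustration of the idea (names here are illustrative;
the actual flag and interface are introduced by the patches below),
the core of the balancing is to allocate migration targets from an
opted-in CMA area so that movable pages leave non-CMA pageblocks:

/*
 * Allocate a migration target from an opted-in CMA area (base pages only
 * in this sketch; the real code builds on the compaction machinery).
 */
static struct folio *cma_balance_target(struct folio *src, unsigned long data)
{
	struct cma *cma = (struct cma *)data;
	struct page *page;

	if (folio_order(src))
		return NULL;

	page = cma_alloc(cma, 1, 0, true);
	return page ? page_folio(page) : NULL;
}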

Lastly, the hugetlb_cma code was modified to try to migrate
movable allocations from non-CMA to CMA when a hugetlb CMA page is
freed. Only hugetlb CMA areas opt in to CMA balancing; the
behavior of all other CMA areas is unchanged.
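
Roughly, the hook ends up in the hugetlb CMA free path, along these
lines (the cma_balance() call is hypothetical shorthand for the
interface added by the series):

void hugetlb_cma_free_folio(struct folio *folio)
{
	int nid = folio_nid(folio);

	WARN_ON_ONCE(!cma_free_folio(hugetlb_cma[nid], folio));

	/*
	 * A gigantic page just went back to an opted-in CMA area; use the
	 * new free space to pull movable pages out of non-CMA pageblocks.
	 */
	cma_balance(nid, folio_nr_pages(folio));
}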

Discussion
==========

This approach works when tested with a hugetlb_cma setup in which
a large number of 1G pages is active, but that number is sometimes
reduced in exchange for larger non-hugetlb overhead.

Arguments against this approach:

* It's somewhat heavy-handed. Since there is no easy way to track
  the number of movable allocations residing in non-CMA
  pageblocks, it will likely end up scanning too much memory, as
  it only knows the upper bound.
* It should be more integrated with watermark handling in the
  allocation slow path. Again, this would likely require tracking
  the number of movable allocations in non-CMA pageblocks.

Arguments for this approach:

* Yes, it does more work, but that work is restricted to the
  context of a process that decreases the hugetlb pool, and it is
  not more work than allocating (e.g. freeing a hugetlb page from
  the pool is now as expensive as allocating a new one).
* hugetlb_cma is really the only situation where you have CMA
  areas large enough to trigger the OOM scenario, so restricting
  it to hugetlb should be good enough.

Comments, thoughts?

Frank van der Linden (12):
  mm/cma: add tunable for CMA fallback limit
  mm/cma: clean up flag handling a bit
  mm/cma: add flags argument to init functions
  mm/cma: keep a global sorted list of CMA ranges
  mm/cma: add helper functions for CMA balancing
  mm/cma: define and act on CMA_BALANCE flag
  mm/compaction: optionally use a different isolate function
  mm/compaction: simplify isolation order checks a bit
  mm/cma: introduce CMA balancing
  mm/hugetlb: do explicit CMA balancing
  mm/cma: rebalance CMA when changing cma_first_limit
  mm/cma: add CMA balance VM event counter

 arch/powerpc/kernel/fadump.c         |   2 +-
 arch/powerpc/kvm/book3s_hv_builtin.c |   2 +-
 drivers/s390/char/vmcp.c             |   2 +-
 include/linux/cma.h                  |  64 +++++-
 include/linux/migrate_mode.h         |   1 +
 include/linux/mm.h                   |   4 +
 include/linux/vm_event_item.h        |   3 +
 include/trace/events/migrate.h       |   3 +-
 kernel/dma/contiguous.c              |  10 +-
 mm/cma.c                             | 318 +++++++++++++++++++++++----
 mm/cma.h                             |  13 +-
 mm/compaction.c                      | 199 +++++++++++++++--
 mm/hugetlb.c                         |  14 +-
 mm/hugetlb_cma.c                     |  18 +-
 mm/hugetlb_cma.h                     |   5 +
 mm/internal.h                        |  11 +-
 mm/migrate.c                         |   8 +
 mm/page_alloc.c                      | 104 +++++++--
 mm/vmstat.c                          |   2 +
 19 files changed, 676 insertions(+), 107 deletions(-)

-- 
2.51.0.384.g4c02a37b29-goog

