Message-ID: <20201221170551.GB3428478@carbon.DHCP.thefacebook.com>
Date:   Mon, 21 Dec 2020 09:05:51 -0800
From:   Roman Gushchin <guro@...com>
To:     Mike Rapoport <rppt@...nel.org>
CC:     Andrew Morton <akpm@...ux-foundation.org>, <linux-mm@...ck.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Rik van Riel <riel@...riel.com>,
        Michal Hocko <mhocko@...nel.org>,
        <linux-kernel@...r.kernel.org>, <kernel-team@...com>
Subject: Re: [PATCH v2 1/2] mm: cma: allocate cma areas bottom-up

On Sun, Dec 20, 2020 at 08:48:48AM +0200, Mike Rapoport wrote:
> On Thu, Dec 17, 2020 at 12:12:13PM -0800, Roman Gushchin wrote:
> > Currently, cma areas without a fixed base are allocated close to the
> > end of the node. This placement is sub-optimal because of compaction:
> > it brings pages into the cma area. In particular, it can bring in hot
> > executable pages, even if there is plenty of free memory on the
> > machine. This results in cma allocation failures.
> > 
> > Instead, let's place cma areas close to the beginning of a node.
> > In this case compaction will help to free cma areas, resulting
> > in better cma allocation success rates.
> > 
> > If there is enough memory, let's try to allocate bottom-up starting
> > at 4GB to exclude any possible interference with DMA32. On smaller
> > machines, or in case of failure, stick with the old behavior.
> > 
> > 16GB vm, 2GB cma area:
> > With this patch:
> > [    0.000000] Command line: root=/dev/vda3 rootflags=subvol=/root systemd.unified_cgroup_hierarchy=1 enforcing=0 console=ttyS0,115200 hugetlb_cma=2G
> > [    0.002928] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
> > [    0.002930] cma: Reserved 2048 MiB at 0x0000000100000000
> > [    0.002931] hugetlb_cma: reserved 2048 MiB on node 0
> > 
> > Without this patch:
> > [    0.000000] Command line: root=/dev/vda3 rootflags=subvol=/root systemd.unified_cgroup_hierarchy=1 enforcing=0 console=ttyS0,115200 hugetlb_cma=2G
> > [    0.002930] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
> > [    0.002933] cma: Reserved 2048 MiB at 0x00000003c0000000
> > [    0.002934] hugetlb_cma: reserved 2048 MiB on node 0
> > 
> > v2:
> >   - switched to memblock_set_bottom_up(true), by Mike
> >   - start with 4GB, by Mike
> > 
> > Signed-off-by: Roman Gushchin <guro@...com>
> 
> With one nit below 
> 
> Reviewed-by: Mike Rapoport <rppt@...ux.ibm.com>
> 
> > ---
> >  mm/cma.c | 16 ++++++++++++++++
> >  1 file changed, 16 insertions(+)
> > 
> > diff --git a/mm/cma.c b/mm/cma.c
> > index 7f415d7cda9f..21fd40c092f0 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -337,6 +337,22 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
> >  			limit = highmem_start;
> >  		}
> >  
> > +		/*
> > +		 * If there is enough memory, try a bottom-up allocation first.
> > +		 * It will place the new cma area close to the start of the node
> > +		 * and guarantee that compaction moves pages out of the
> > +		 * cma area and not into it.
> > +		 * Avoid using the first 4GB so as not to interfere with
> > +		 * constrained zones like DMA/DMA32.
> > +		 */
> > +		if (!memblock_bottom_up() &&
> > +		    memblock_end >= SZ_4G + size) {
>

Hi Mike!

> This seems short enough to fit on a single line

Indeed. An updated version below.

Thank you for the review of the series!

I assume it's simpler to route both patches through the mm tree.
What do you think?

Thanks!

--

From f88bd0a425c7181bd26a4cf900e6924a7b521419 Mon Sep 17 00:00:00 2001
From: Roman Gushchin <guro@...com>
Date: Mon, 14 Dec 2020 20:20:52 -0800
Subject: [PATCH v3 1/2] mm: cma: allocate cma areas bottom-up

Currently, cma areas without a fixed base are allocated close to the
end of the node. This placement is sub-optimal because of compaction:
it brings pages into the cma area. In particular, it can bring in hot
executable pages, even if there is plenty of free memory on the
machine. This results in cma allocation failures.

Instead, let's place cma areas close to the beginning of a node.
In this case compaction will help to free cma areas, resulting
in better cma allocation success rates.

If there is enough memory, let's try to allocate bottom-up starting
at 4GB to exclude any possible interference with DMA32. On smaller
machines, or in case of failure, stick with the old behavior.

16GB vm, 2GB cma area:
With this patch:
[    0.000000] Command line: root=/dev/vda3 rootflags=subvol=/root systemd.unified_cgroup_hierarchy=1 enforcing=0 console=ttyS0,115200 hugetlb_cma=2G
[    0.002928] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
[    0.002930] cma: Reserved 2048 MiB at 0x0000000100000000
[    0.002931] hugetlb_cma: reserved 2048 MiB on node 0

Without this patch:
[    0.000000] Command line: root=/dev/vda3 rootflags=subvol=/root systemd.unified_cgroup_hierarchy=1 enforcing=0 console=ttyS0,115200 hugetlb_cma=2G
[    0.002930] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
[    0.002933] cma: Reserved 2048 MiB at 0x00000003c0000000
[    0.002934] hugetlb_cma: reserved 2048 MiB on node 0

v3:
  - code alignment fix, by Mike
v2:
  - switched to memblock_set_bottom_up(true), by Mike
  - start with 4GB, by Mike

Signed-off-by: Roman Gushchin <guro@...com>
Reviewed-by: Mike Rapoport <rppt@...ux.ibm.com>
---
 mm/cma.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/mm/cma.c b/mm/cma.c
index 20c4f6f40037..4fe74c9d83b0 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -336,6 +336,21 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 			limit = highmem_start;
 		}
 
+		/*
+		 * If there is enough memory, try a bottom-up allocation first.
+		 * It will place the new cma area close to the start of the node
+		 * and guarantee that compaction moves pages out of the
+		 * cma area and not into it.
+		 * Avoid using the first 4GB so as not to interfere with
+		 * constrained zones like DMA/DMA32.
+		 */
+		if (!memblock_bottom_up() && memblock_end >= SZ_4G + size) {
+			memblock_set_bottom_up(true);
+			addr = memblock_alloc_range_nid(size, alignment, SZ_4G,
+							limit, nid, true);
+			memblock_set_bottom_up(false);
+		}
+
 		if (!addr) {
 			addr = memblock_alloc_range_nid(size, alignment, base,
 					limit, nid, true);
-- 
2.26.2
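
For anyone who wants to reuse this placement trick elsewhere, a minimal
standalone sketch follows. The helper name and its parameters are
hypothetical; memblock_bottom_up(), memblock_set_bottom_up(),
memblock_alloc_range_nid() and memblock_end_of_DRAM() are the existing
memblock interfaces the patch builds on:

#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/sizes.h>

/*
 * Hypothetical example: reserve "size" bytes with the same policy as
 * the patch above. Prefer a bottom-up placement starting at 4GB so
 * that compaction tends to move pages out of the area, and fall back
 * to the default top-down placement on small machines or on failure.
 */
static phys_addr_t __init reserve_bottom_up(phys_addr_t size,
					    phys_addr_t align,
					    phys_addr_t limit, int nid)
{
	phys_addr_t addr = 0;

	/*
	 * Try bottom-up only if DRAM extends at least "size" bytes
	 * past the 4GB boundary.
	 */
	if (!memblock_bottom_up() && memblock_end_of_DRAM() >= SZ_4G + size) {
		memblock_set_bottom_up(true);
		addr = memblock_alloc_range_nid(size, align, SZ_4G,
						limit, nid, true);
		memblock_set_bottom_up(false);
	}

	/* Old behavior: top-down allocation anywhere below "limit". */
	if (!addr)
		addr = memblock_alloc_range_nid(size, align, 0,
						limit, nid, true);

	return addr;
}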
