Message-ID: <20201220064848.GA392325@kernel.org>
Date: Sun, 20 Dec 2020 08:48:48 +0200
From: Mike Rapoport <rppt@...nel.org>
To: Roman Gushchin <guro@...com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Rik van Riel <riel@...riel.com>,
Michal Hocko <mhocko@...nel.org>, linux-kernel@...r.kernel.org,
kernel-team@...com
Subject: Re: [PATCH v2 1/2] mm: cma: allocate cma areas bottom-up
On Thu, Dec 17, 2020 at 12:12:13PM -0800, Roman Gushchin wrote:
> Currently cma areas without a fixed base are allocated close to the
> end of the node. This placement is sub-optimal because of compaction:
> it brings pages into the cma area. In particular, it can bring in hot
> executable pages, even if there is plenty of free memory on the
> machine. This results in cma allocation failures.
>
> Instead, let's place cma areas close to the beginning of a node.
> In this case, compaction will help to free cma areas, resulting
> in better cma allocation success rates.
>
> If there is enough memory, let's try to allocate bottom-up, starting
> at 4GB to exclude any possible interference with DMA32. On smaller
> machines, or in case of a failure, stick with the old behavior.
>
> 16GB VM, 2GB cma area:
> With this patch:
> [ 0.000000] Command line: root=/dev/vda3 rootflags=subvol=/root systemd.unified_cgroup_hierarchy=1 enforcing=0 console=ttyS0,115200 hugetlb_cma=2G
> [ 0.002928] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
> [ 0.002930] cma: Reserved 2048 MiB at 0x0000000100000000
> [ 0.002931] hugetlb_cma: reserved 2048 MiB on node 0
>
> Without this patch:
> [ 0.000000] Command line: root=/dev/vda3 rootflags=subvol=/root systemd.unified_cgroup_hierarchy=1 enforcing=0 console=ttyS0,115200 hugetlb_cma=2G
> [ 0.002930] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
> [ 0.002933] cma: Reserved 2048 MiB at 0x00000003c0000000
> [ 0.002934] hugetlb_cma: reserved 2048 MiB on node 0
>
> v2:
> - switched to memblock_set_bottom_up(true), by Mike
> - start with 4GB, by Mike
>
> Signed-off-by: Roman Gushchin <guro@...com>
With one nit below,
Reviewed-by: Mike Rapoport <rppt@...ux.ibm.com>
> ---
> mm/cma.c | 16 ++++++++++++++++
> 1 file changed, 16 insertions(+)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index 7f415d7cda9f..21fd40c092f0 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -337,6 +337,22 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
> limit = highmem_start;
> }
>
> + /*
> + * If there is enough memory, try a bottom-up allocation first.
> + * It will place the new cma area close to the start of the node
> + * and guarantee that compaction moves pages out of the
> + * cma area and not into it.
> + * Avoid using the first 4GB so as not to interfere with
> + * constrained zones like DMA/DMA32.
> + */
> + if (!memblock_bottom_up() &&
> + memblock_end >= SZ_4G + size) {
This seems short enough to fit on a single line.
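
Something like this, just joining the condition onto one line:

	if (!memblock_bottom_up() && memblock_end >= SZ_4G + size) {
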
> + memblock_set_bottom_up(true);
> + addr = memblock_alloc_range_nid(size, alignment, SZ_4G,
> + limit, nid, true);
> + memblock_set_bottom_up(false);
> + }
> +
> if (!addr) {
> addr = memblock_alloc_range_nid(size, alignment, base,
> limit, nid, true);
> --
> 2.26.2
>
--
Sincerely yours,
Mike.