Message-ID: <40b0f059bbb62e4bd6fed33b3990def3d2aed124.camel@surriel.com>
Date: Tue, 16 Sep 2025 18:25:11 -0400
From: Rik van Riel <riel@...riel.com>
To: Frank van der Linden <fvdl@...gle.com>, akpm@...ux-foundation.org,
muchun.song@...ux.dev, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: hannes@...xchg.org, david@...hat.com, roman.gushchin@...ux.dev
Subject: Re: [RFC PATCH 04/12] mm/cma: keep a global sorted list of CMA
ranges
On Mon, 2025-09-15 at 19:51 +0000, Frank van der Linden wrote:
> In order to walk through CMA areas efficiently, it is useful
> to keep a global sorted list of ranges.
>
> Create this list when activating the areas.
>
> Since users of this list may want to reference the CMA area
> the range came from, there needs to be a link from the range
> to that area. So, store a pointer to the CMA structure in
> the cma_memrange structure. This also reduces the number
> of arguments to a few internal functions.
>
> Signed-off-by: Frank van der Linden <fvdl@...gle.com>
>
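For anyone following along: my reading of the resulting layout is
roughly the following (a sketch only, other fields omitted, names
guessed from the existing mm/cma.h; the struct cma back-pointer is
what this patch adds):

	struct cma_memrange {
		unsigned long base_pfn;		/* first PFN of the range */
		unsigned long count;		/* number of pages */
		struct cma *cma;		/* new: owning CMA area */
	};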
> static int __init cma_init_reserved_areas(void)
> {
> -	int i;
> +	int i, r, nranges;
> +	struct cma *cma;
> +	struct cma_memrange *cmr;
> +
> +	nranges = 0;
> +	for (i = 0; i < cma_area_count; i++) {
> +		cma = &cma_areas[i];
> +		nranges += cma->nranges;
> +		cma_activate_area(cma);
> +	}
> +
> +	cma_ranges = kcalloc(nranges, sizeof(*cma_ranges), GFP_KERNEL);
> +	cma_nranges = 0;
> +	for (i = 0; i < cma_area_count; i++) {
> +		cma = &cma_areas[i];
> +		for (r = 0; r < cma->nranges; r++) {
> +			cmr = &cma->ranges[r];
> +			cma_ranges[cma_nranges++] = cmr;
> +		}
> +	}
>
> -	for (i = 0; i < cma_area_count; i++)
> -		cma_activate_area(&cma_areas[i]);
> +	sort(cma_ranges, cma_nranges, sizeof(*cma_ranges), cmprange, NULL);
>
I am guessing this is safe because cma_init_reserved_areas()
is an initcall, and is therefore only called once. Is that correct?

Is it worth a comment documenting why this function builds
a sorted array of CMA ranges?
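The comparator itself is not quoted above, but I assume cmprange
orders the pointer array by base PFN, something like this untested
sketch (note the extra indirection: cma_ranges[] holds pointers, so
sort() hands the comparator pointers to pointers):

	static int cmprange(const void *a, const void *b)
	{
		struct cma_memrange *ra = *(struct cma_memrange * const *)a;
		struct cma_memrange *rb = *(struct cma_memrange * const *)b;

		if (ra->base_pfn < rb->base_pfn)
			return -1;
		return ra->base_pfn > rb->base_pfn;
	}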
>
> 	return 0;
> }
>
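To make the "why" concrete: I would expect consumers of the sorted
array to binary search it to map a PFN back to its CMA area, along
these lines (cma_find_area is a hypothetical name, and I am assuming
count is in pages):

	static struct cma *cma_find_area(unsigned long pfn)
	{
		int lo = 0, hi = cma_nranges - 1;

		while (lo <= hi) {
			int mid = lo + (hi - lo) / 2;
			struct cma_memrange *cmr = cma_ranges[mid];

			if (pfn < cmr->base_pfn)
				hi = mid - 1;
			else if (pfn >= cmr->base_pfn + cmr->count)
				lo = mid + 1;
			else
				return cmr->cma;
		}
		return NULL;
	}

A comment to that effect next to the sort() call would answer my
question above.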
--
All Rights Reversed.