Message-ID: <mafs0r0h36utm.fsf@amazon.de>
Date: Fri, 23 Feb 2024 16:53:25 +0100
From: Pratyush Yadav <ptyadav@...zon.de>
To: Alexander Graf <graf@...zon.com>
CC: <linux-kernel@...r.kernel.org>, <linux-trace-kernel@...r.kernel.org>,
<linux-mm@...ck.org>, <devicetree@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>, <kexec@...ts.infradead.org>,
<linux-doc@...r.kernel.org>, <x86@...nel.org>, Eric Biederman
<ebiederm@...ssion.com>, "H . Peter Anvin" <hpa@...or.com>, Andy Lutomirski
<luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>, Steven Rostedt
<rostedt@...dmis.org>, Andrew Morton <akpm@...ux-foundation.org>, "Mark
Rutland" <mark.rutland@....com>, Tom Lendacky <thomas.lendacky@....com>,
Ashish Kalra <ashish.kalra@....com>, James Gowans <jgowans@...zon.com>,
Stanislav Kinsburskii <skinsburskii@...ux.microsoft.com>, <arnd@...db.de>,
<pbonzini@...hat.com>, <madvenka@...ux.microsoft.com>, Anthony Yznaga
<anthony.yznaga@...cle.com>, Usama Arif <usama.arif@...edance.com>, "David
Woodhouse" <dwmw@...zon.co.uk>, Benjamin Herrenschmidt
<benh@...nel.crashing.org>, Rob Herring <robh+dt@...nel.org>, "Krzysztof
Kozlowski" <krzk@...nel.org>
Subject: Re: [PATCH v3 02/17] memblock: Declare scratch memory as CMA
Hi,
On Wed, Jan 17 2024, Alexander Graf wrote:
> When we finish populating our memory, we don't want to lose the scratch
> region as memory we can use for useful data. To do that, we mark it as
> CMA memory. That means that any allocation within it only happens with
> movable memory which we can then happily discard for the next kexec.
>
> That way, the scratch region's memory is no longer lost to
> allocations after boot.
>
> Signed-off-by: Alexander Graf <graf@...zon.com>
>
[...]
> @@ -2188,6 +2185,16 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
> }
> }
>
> +static void mark_phys_as_cma(phys_addr_t start, phys_addr_t end)
> +{
> + ulong start_pfn = pageblock_start_pfn(PFN_DOWN(start));
> + ulong end_pfn = pageblock_align(PFN_UP(end));
> + ulong pfn;
> +
> + for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages)
> + set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_CMA);
This fails to compile when CONFIG_CMA is disabled, since MIGRATE_CMA is
only defined in that case. I think you should add CMA as a dependency
for CONFIG_MEMBLOCK_SCRATCH.
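Something along these lines in the Kconfig entry should do it (just a
sketch; I haven't checked what MEMBLOCK_SCRATCH already depends on or
selects in your series):

```
config MEMBLOCK_SCRATCH
	bool
	depends on CMA
```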
> +}
> +
> static unsigned long __init __free_memory_core(phys_addr_t start,
> phys_addr_t end)
> {
> @@ -2249,6 +2256,17 @@ static unsigned long __init free_low_memory_core_early(void)
>
> memmap_init_reserved_pages();
>
> + if (IS_ENABLED(CONFIG_MEMBLOCK_SCRATCH)) {
> + /*
> + * Mark scratch mem as CMA before we return it. That way we
> + * ensure that no kernel allocations happen on it. That means
> + * we can reuse it as scratch memory again later.
> + */
> + __for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
> + MEMBLOCK_SCRATCH, &start, &end, NULL)
> + mark_phys_as_cma(start, end);
> + }
> +
> /*
> * We need to use NUMA_NO_NODE instead of NODE_DATA(0)->node_id
> * because in some case like Node0 doesn't have RAM installed
--
Regards,
Pratyush Yadav
Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879