Message-ID: <20231227123257.1170590-2-enachman@marvell.com>
Date: Wed, 27 Dec 2023 14:32:54 +0200
From: Elad Nachman <enachman@...vell.com>
To: <robh+dt@...nel.org>, <krzysztof.kozlowski+dt@...aro.org>,
<conor+dt@...nel.org>, <andrew@...n.ch>, <gregory.clement@...tlin.com>,
<sebastian.hesselbarth@...il.com>, <huziji@...vell.com>,
<ulf.hansson@...aro.org>, <catalin.marinas@....com>, <will@...nel.org>,
<adrian.hunter@...el.com>, <thunder.leizhen@...wei.com>,
<bhe@...hat.com>, <akpm@...ux-foundation.org>, <yajun.deng@...ux.dev>,
<chris.zjh@...wei.com>, <linux-mmc@...r.kernel.org>,
<devicetree@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>
CC: <enachman@...vell.com>, <cyuval@...vell.com>
Subject: [PATCH 1/4] arm64: mm: Fix SOCs with DDR starting above zero
From: Elad Nachman <enachman@...vell.com>
Some SOCs, like the Marvell AC5/X/IM, have DDR starting at
0x2_0000_0000 combined with DMA controllers limited to 31 or 32 bits
of addressing.
This requires properly arranging ZONE_DMA and ZONE_DMA32 for these
SOCs, so that swiotlb and coherent DMA allocations work correctly.
Change the initialization so that the device tree DMA zone bits are
taken as a function of the offset from the start of DRAM, and, when
calculating the maximal zone physical RAM address for physical DDR
starting above 32-bit, combine the physical start address with the
zone mask passed as a parameter.
This creates the proper zone splitting for these SOCs:
0..2GB for ZONE_DMA
2GB..4GB for ZONE_DMA32
4GB..8GB for ZONE_NORMAL
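For illustration only (not part of the patch), here is a minimal
user-space sketch of the resulting arithmetic, assuming 8GB of DDR at
0x2_0000_0000 and device tree dma-ranges that make the highest
DMA-reachable CPU address 0x2_7fff_ffff (31 bits above the DRAM base);
the constants and helpers below are hypothetical stand-ins for the
kernel's memblock and OF helpers:

#include <stdint.h>
#include <stdio.h>

#define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))
#define U32_MAX_64	0xffffffffULL

/* Assumed platform values, for illustration only */
static const uint64_t dram_start = 0x200000000ULL;	  /* memblock_start_of_DRAM() */
static const uint64_t dram_end   = 0x400000000ULL;	  /* memblock_end_of_DRAM()   */
static const uint64_t max_dma_cpu_addr = 0x27fffffffULL;  /* of_dma_get_max_cpu_address() */

/* Mirrors the patched max_zone_phys(): offset the zone mask by the DRAM base */
static uint64_t max_zone_phys(unsigned int zone_bits)
{
	uint64_t zone_mask = DMA_BIT_MASK(zone_bits);

	if (dram_start > U32_MAX_64)
		zone_mask = dram_start | zone_mask;
	else if (dram_start > zone_mask)
		zone_mask = U32_MAX_64;

	return (zone_mask < dram_end - 1 ? zone_mask : dram_end - 1) + 1;
}

int main(void)
{
	/* fls64(0x2_7fff_ffff - 0x2_0000_0000) = 31, instead of 34 before the patch */
	unsigned int dt_zone_dma_bits = 64 - __builtin_clzll(max_dma_cpu_addr - dram_start);

	printf("ZONE_DMA   limit: 0x%llx\n",
	       (unsigned long long)max_zone_phys(dt_zone_dma_bits));	/* 0x2_8000_0000 */
	printf("ZONE_DMA32 limit: 0x%llx\n",
	       (unsigned long long)max_zone_phys(32));			/* 0x3_0000_0000 */
	return 0;
}

With these assumed inputs the two limits come out at 0x2_8000_0000 and
0x3_0000_0000, i.e. the 0..2GB and 2GB..4GB split listed above, leaving
4GB..8GB for ZONE_NORMAL.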
Signed-off-by: Elad Nachman <enachman@...vell.com>
---
arch/arm64/mm/init.c | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 74c1db8ce271..8288c778916e 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -115,20 +115,21 @@ static void __init arch_reserve_crashkernel(void)
/*
* Return the maximum physical address for a zone accessible by the given bits
- * limit. If DRAM starts above 32-bit, expand the zone to the maximum
- * available memory, otherwise cap it at 32-bit.
+ * limit. If DRAM starts above 32-bit, extend the zone from the DRAM start
+ * up to the limit given by the zone bits mask, otherwise cap it at 32-bit.
*/
static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
{
phys_addr_t zone_mask = DMA_BIT_MASK(zone_bits);
phys_addr_t phys_start = memblock_start_of_DRAM();
+ phys_addr_t phys_end = memblock_end_of_DRAM();
if (phys_start > U32_MAX)
- zone_mask = PHYS_ADDR_MAX;
+ zone_mask = phys_start | zone_mask;
else if (phys_start > zone_mask)
zone_mask = U32_MAX;
- return min(zone_mask, memblock_end_of_DRAM() - 1) + 1;
+ return min(zone_mask, phys_end - 1) + 1;
}
static void __init zone_sizes_init(void)
@@ -140,7 +141,16 @@ static void __init zone_sizes_init(void)
#ifdef CONFIG_ZONE_DMA
acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
- dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
+ /*
+ * When calculating the DMA zone bits from the device tree, subtract
+ * the DRAM start address, in case DRAM does not start at address
+ * zero. This way, we pass only the zone-size-related bits to
+ * max_zone_phys(), which will add them back to the DRAM base.
+ * This prevents miscalculations on arm64 SOCs which combine
+ * DDR starting above 4GB with DMA controllers limited to
+ * 32 bits of addressing or less.
+ */
+ dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL) - memblock_start_of_DRAM());
zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
--
2.25.1