Message-ID: <2fa9d7954a99b018a32314b9baab25ba18504f15.1712642324.git.baruch@tkos.co.il>
Date: Tue, 9 Apr 2024 09:17:58 +0300
From: Baruch Siach <baruch@...s.co.il>
To: Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Rob Herring <robh@...nel.org>,
Saravana Kannan <saravanak@...gle.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>
Cc: Baruch Siach <baruch@...s.co.il>,
Robin Murphy <robin.murphy@....com>,
iommu@...ts.linux.dev,
devicetree@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org,
linux-s390@...r.kernel.org,
Petr Tesařík <petr@...arici.cz>,
Ramon Fried <ramon@...reality.ai>,
Elad Nachman <enachman@...vell.com>
Subject: [PATCH RFC v2 5/5] arm64: mm: take DMA zone offset into account

Commit 791ab8b2e3db ("arm64: Ignore any DMA offsets in the
max_zone_phys() calculation") made the DMA/DMA32 zones span the entire
RAM when RAM starts above 32 bits. This breaks hardware whose DMA area
starts above 32 bits. The commit log said "we haven't noticed any such
hardware", but it turns out that such hardware does exist.

One such platform has RAM starting at 32GB with an internal bus that has
the following DMA limits:

  #address-cells = <2>;
  #size-cells = <2>;
  dma-ranges = <0x00 0xc0000000 0x08 0x00000000 0x00 0x40000000>;

Devices under this bus can see 1GB of DMA range between 3GB and 4GB in
their address space. This range is mapped to CPU memory at 32GB-33GB.
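
For reference, here is a minimal standalone sketch (not part of the
patch; illustration only) of how those dma-ranges cells decode, using
the values from the snippet above:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	/* <bus-addr cpu-addr size>, each a pair of 32-bit cells
	 * since #address-cells = #size-cells = <2> */
	uint64_t bus  = ((uint64_t)0x00 << 32) | 0xc0000000; /* device start:  3GB */
	uint64_t cpu  = ((uint64_t)0x08 << 32) | 0x00000000; /* CPU start:    32GB */
	uint64_t size = ((uint64_t)0x00 << 32) | 0x40000000; /* window size:   1GB */

	printf("bus window: 0x%" PRIx64 "-0x%" PRIx64 "\n", bus, bus + size - 1);
	printf("CPU window: 0x%" PRIx64 "-0x%" PRIx64 "\n", cpu, cpu + size - 1);
	return 0;
}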

With the current code, DMA allocations for devices under this bus are
not limited to the DMA area, leading to run-time allocation failures.

Modify the 'zone_dma_limit' calculation (via 'dt_zone_dma_limit') to
cover only the actual DMA area, which starts at 'zone_dma_base'. Use the
newly introduced 'min' parameter of of_dma_get_cpu_limits() to set
'zone_dma_base'. The DMA32 zone is useless in this configuration, so
make its limit the same as the DMA zone limit when the start of the DMA
area is above 32 bits.
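
As a rough userspace illustration of the resulting limits (hard-coded
values from the example above; the local names are mine, this is not
kernel code):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint64_t dma_base  = 0x800000000ULL;                     /* DMA area start (32GB) */
	uint64_t dma_limit = 0x800000000ULL + 0x40000000ULL - 1; /* DMA area end          */
	uint64_t dma32_limit;

	/* A DMA32 zone cannot describe anything useful when the DMA area
	 * itself starts above 32 bits, so collapse it onto the DMA limit. */
	if (dma_base > UINT32_MAX)
		dma32_limit = dma_limit;
	else
		dma32_limit = UINT32_MAX;

	printf("DMA   limit: 0x%" PRIx64 "\n", dma_limit);
	printf("DMA32 limit: 0x%" PRIx64 "\n", dma32_limit);
	return 0;
}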

The result is a DMA zone that properly reflects the hardware
constraints:

[ 0.000000] Zone ranges:
[ 0.000000] DMA [mem 0x0000000800000000-0x000000083fffffff]
[ 0.000000] DMA32 empty
[ 0.000000] Normal [mem 0x0000000840000000-0x0000000bffffffff]

Suggested-by: Catalin Marinas <catalin.marinas@....com>
Signed-off-by: Baruch Siach <baruch@...s.co.il>
---
arch/arm64/mm/init.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 77e942ca578b..cd283ae0178d 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -128,9 +128,11 @@ static void __init zone_sizes_init(void)
 
 #ifdef CONFIG_ZONE_DMA
 	acpi_zone_dma_limit = acpi_iort_dma_get_max_cpu_address();
-	of_dma_get_cpu_limits(NULL, &dt_zone_dma_limit, NULL);
+	of_dma_get_cpu_limits(NULL, &dt_zone_dma_limit, &zone_dma_base);
 	zone_dma_limit = min(dt_zone_dma_limit, acpi_zone_dma_limit);
 	arm64_dma_phys_limit = max_zone_phys(zone_dma_limit);
+	if (zone_dma_base > U32_MAX)
+		dma32_phys_limit = arm64_dma_phys_limit;
 	max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
 #endif
 #ifdef CONFIG_ZONE_DMA32
--
2.43.0