Message-Id: <20230414212926.2336072-1-f.fainelli@gmail.com>
Date: Fri, 14 Apr 2023 14:29:25 -0700
From: Florian Fainelli <f.fainelli@...il.com>
To: linux-kernel@...r.kernel.org
Cc: Doug Berger <opendmb@...il.com>,
Florian Fainelli <f.fainelli@...il.com>,
Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
Claire Chang <tientzu@...omium.org>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
iommu@...ts.linux.dev (open list:DMA MAPPING HELPERS)
Subject: [PATCH] swiotlb: Relocate PageHighMem test away from rmem_swiotlb_setup

From: Doug Berger <opendmb@...il.com>

The reservedmem_of_init_fn callbacks are invoked very early at boot,
before the memory zones have even been defined. This makes it
inappropriate to test whether the page corresponding to a PFN is in
ZONE_HIGHMEM from within one of these callbacks.
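
For context, the setup hook in question is registered with
RESERVEDMEM_OF_DECLARE() (sketch below, simplified from
kernel/dma/swiotlb.c; the call-path comment is my annotation):

	/*
	 * Invoked via early_init_fdt_scan_reserved_mem(), well before
	 * free_area_init() has set up the memory zones, so
	 * pfn_to_page()/PageHighMem() are not yet safe to call here.
	 */
	RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);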
Removing the check allows an ARM 32-bit kernel with SPARSEMEM enabled
to boot properly, since otherwise we would be dereferencing an
uninitialized sparsemem map to perform the pfn_to_page() check.
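
To illustrate: with CONFIG_SPARSEMEM (and no vmemmap), pfn_to_page()
has to go through the section map first, roughly as in
include/asm-generic/memory_model.h:

	#define __pfn_to_page(pfn)					\
	({	unsigned long __pfn = (pfn);				\
		struct mem_section *__sec = __pfn_to_section(__pfn);	\
		__section_mem_map_addr(__sec) + __pfn;			\
	})

Before the sparsemem section maps are populated, the struct page
pointer computed here is garbage, and the subsequent PageHighMem()
test dereferences it.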
The arm64 architecture happens to work (and also has no high memory),
but other 32-bit architectures could have similar issues.

While it would be nice to provide early feedback about a reserved DMA
pool residing in highmem, that is not possible until the first time we
try to use the pool, so the check is moved to
rmem_swiotlb_device_init(), the first point of use.
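
For reference, rmem_swiotlb_device_init() is wired up as the
.device_init hook of the pool's reserved_mem_ops (simplified from
kernel/dma/swiotlb.c), so the relocated check now runs when the first
device attaches to the pool, by which time the zones and the sparsemem
map are fully initialized:

	static const struct reserved_mem_ops rmem_swiotlb_ops = {
		.device_init	= rmem_swiotlb_device_init,
		.device_release	= rmem_swiotlb_device_release,
	};
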
Fixes: 0b84e4f8b793 ("swiotlb: Add restricted DMA pool initialization")
Signed-off-by: Doug Berger <opendmb@...il.com>
Signed-off-by: Florian Fainelli <f.fainelli@...il.com>
---
kernel/dma/swiotlb.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index dac42a2ad588..2bb9e3b02380 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -998,6 +998,11 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 	/* Set Per-device io tlb area to one */
 	unsigned int nareas = 1;
 
+	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+		dev_err(dev, "Restricted DMA pool must be accessible within the linear mapping.");
+		return -EINVAL;
+	}
+
 	/*
 	 * Since multiple devices can share the same pool, the private data,
 	 * io_tlb_mem struct, will be initialized by the first device attached
@@ -1059,11 +1064,6 @@ static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
 	    of_get_flat_dt_prop(node, "no-map", NULL))
 		return -EINVAL;
 
-	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
-		pr_err("Restricted DMA pool must be accessible within the linear mapping.");
-		return -EINVAL;
-	}
-
 	rmem->ops = &rmem_swiotlb_ops;
 	pr_info("Reserved memory: created restricted DMA pool at %pa, size %ld MiB\n",
 		&rmem->base, (unsigned long)rmem->size / SZ_1M);
--
2.34.1