Message-Id: <20210806155523.50429-1-sven@svenpeter.dev>
Date: Fri, 6 Aug 2021 17:55:20 +0200
From: Sven Peter <sven@...npeter.dev>
To: iommu@...ts.linux-foundation.org
Cc: Sven Peter <sven@...npeter.dev>, Joerg Roedel <joro@...tes.org>,
Will Deacon <will@...nel.org>,
Robin Murphy <robin.murphy@....com>,
Arnd Bergmann <arnd@...nel.org>,
Mohamed Mediouni <mohamed.mediouni@...amail.com>,
Alexander Graf <graf@...zon.com>,
Hector Martin <marcan@...can.st>, linux-kernel@...r.kernel.org
Subject: [RFC PATCH 0/3] iommu/dma-iommu: Support IOMMU page size larger than the CPU page size
Hi,
On the Apple M1 there's this slightly annoying situation where the DART IOMMU
has a hard-wired page size of 16KB. Additionally, the DARTs for some hardware
(USB A ports, WiFi, Ethernet, Thunderbolt PCIe) cannot be switched to bypass
mode and it's also not easily possible to program a software bypass mode.
This is a problem for kernels configured with 4K pages. Unfortunately,
most distributions ship with those by default.
There's not much that can be done for IOMMU_DOMAIN_UNMANAGED domains since
most API clients likely expect to be able to map single CPU pages.
For IOMMU_DOMAIN_DMA domains, however, dma-iommu.c is the only code that
uses the raw IOMMU API to manage them, so it can possibly be adapted
to still work correctly.
Essentially, I changed some relevant alignments to happen with respect to both
PAGE_SIZE and iovad->granule. The sglist code also can't use the optimization
for a single IOVA allocation anymore since most phys_addrs will not be aligned
to the IOMMU page size.
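To illustrate what "aligned with respect to both" means in practice, here's a
minimal sketch (the helper name is mine and not part of the patches):

#include <linux/iova.h>
#include <linux/mm.h>

/*
 * Sizes handed to the IOVA allocator must now cover whole CPU pages
 * *and* whole IOMMU pages, so round up to both granularities.
 */
static inline size_t example_dma_iommu_align(struct iova_domain *iovad,
					     size_t size)
{
	/* PAGE_ALIGN() rounds up to PAGE_SIZE, iova_align() to iovad->granule */
	return iova_align(iovad, PAGE_ALIGN(size));
}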
I'd like to get some early feedback on this approach to find out whether it's
worth continuing to work on it, whether a different approach would work better,
or whether this setup just won't be supported.
I'm not very confident I've covered all the necessary cases, but I'll take
a closer look at every function in dma-iommu.c if there's a chance that
this will be accepted eventually. The current changes are enough to boot
from a USB device and use the Ethernet adapter on my M1 Mini with 4K pages,
though.
One issue I see is that this will end up wasting memory. dma_pool_*, for
example, calls dma_alloc_coherent() for PAGE_SIZE bytes and carves the individual
allocations out of those buffers. These buffers will get padded to SZ_16K, but
dma_pool will be completely unaware that it got four times as much memory as
requested and will leave the extra unused :-(
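As a rough illustration of that pattern (just a sketch for this cover letter;
"dev" stands for any device behind a 16K DART on a kernel with 4K pages):

#include <linux/device.h>
#include <linux/dmapool.h>
#include <linux/gfp.h>

static int example_dma_pool_waste(struct device *dev)
{
	struct dma_pool *pool;
	dma_addr_t handle;
	void *vaddr;

	/* 64-byte blocks, 64-byte alignment, no boundary restriction */
	pool = dma_pool_create("example", dev, 64, 64, 0);
	if (!pool)
		return -ENOMEM;

	vaddr = dma_pool_alloc(pool, GFP_KERNEL, &handle);
	if (!vaddr) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/*
	 * dma_pool asked dma_alloc_coherent() for a PAGE_SIZE (4K) backing
	 * buffer; with a 16K IOVA granule dma-iommu pads that to 16K, but
	 * dma_pool only ever carves blocks out of the first 4K, so the
	 * remaining 12K per backing buffer stay unused.
	 */

	dma_pool_free(pool, vaddr, handle);
	dma_pool_destroy(pool);
	return 0;
}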
The other issue I'm aware of is v4l2, which expects that a page-aligned sglist
can be represented contiguously in IOVA space [1].
Best,
Sven
[1] https://lore.kernel.org/linux-iommu/0d20bd6b-d0a1-019c-6398-b12f83f4fdf7@arm.com/
Sven Peter (3):
  iommu: Move IOMMU pagesize check to attach_device
  iommu/dma-iommu: Support iovad->granule > PAGE_SIZE
  iommu: Introduce __IOMMU_DOMAIN_LARGE_PAGES

 drivers/iommu/dma-iommu.c | 87 ++++++++++++++++++++++++++++++++++-----
 drivers/iommu/iommu.c     | 36 ++++++++++++++--
 drivers/iommu/iova.c      |  7 ++--
 include/linux/iommu.h     | 14 ++++---
 4 files changed, 123 insertions(+), 21 deletions(-)
--
2.25.1