Message-Id: <45faaadd-eda7-464f-96ff-7324f566669e@www.fastmail.com>
Date: Fri, 26 Mar 2021 18:51:55 +0100
From: "Sven Peter" <sven@...npeter.dev>
To: "Robin Murphy" <robin.murphy@....com>,
"Mark Kettenis" <mark.kettenis@...all.nl>,
"Arnd Bergmann" <arnd@...nel.org>
Cc: "Rob Herring" <robh@...nel.org>, iommu@...ts.linux-foundation.org,
joro@...tes.org, "Will Deacon" <will@...nel.org>,
"Hector Martin" <marcan@...can.st>,
"Marc Zyngier" <maz@...nel.org>, mohamed.mediouni@...amail.com,
stan@...ellium.com, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, devicetree@...r.kernel.org
Subject: Re: [PATCH 0/3] Apple M1 DART IOMMU driver
On Fri, Mar 26, 2021, at 18:34, Robin Murphy wrote:
> On 2021-03-26 17:26, Mark Kettenis wrote:
> >
> > Anyway, from my viewpoint having the information about the IOVA
> > address space sit on the devices makes little sense. This information
is needed by the DART driver, and there is no direct connection from
> > the DART to the individual devices in the devicetree. The "iommus"
> > property makes a connection in the opposite direction.
>
> What still seems unclear is whether these addressing limitations are a
> property of the DART input interface, the device output interface, or
> the interconnect between them. Although the observable end result
> appears more or less the same either way, they are conceptually
> different things which we have different abstractions to deal with.
>
> Robin.
>
I'm not really sure if there is any way for us to figure out where these
limitations come from, though.
I've done some more experiments and looked at all the DART nodes in Apple's
Device Tree. It seems that most (if not all) masters only connect 32 address
lines, even though the IOMMU itself can handle a much larger address space.
I'll therefore remove the code that handles the full space for v2, since it's
essentially dead code that can't be tested anyway.
There are some exceptions though:
There are the PCIe DARTs, which have a different limitation that could be
encoded as 'dma-ranges' in the PCI bus node:
name           base      size
dart-apcie1:   00100000  3fe00000
dart-apcie2:   00100000  3fe00000
dart-apcie0:   00100000  3fe00000
dart-apciec0:  00004000  7fffc000
dart-apciec1:  80000000  7fffc000
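A rough sketch of what that could look like (the node name, compatible string
and cell counts here are just assumptions for illustration; only the window
itself, base 00100000 / size 3fe00000, comes from the dart-apcie* entries
above):

```dts
/* Sketch only: node name, compatible and cell layout are assumptions.
 * The DMA window is the dart-apcie1 one from the table above. */
pcie0: pcie@690000000 {
	compatible = "apple,pcie";
	#address-cells = <3>;
	#size-cells = <2>;
	/* identity-map the usable DMA window behind the DART:
	 * 32-bit non-prefetchable space at 0x00100000, 0x3fe00000 long */
	dma-ranges = <0x02000000 0x0 0x00100000	/* PCI address */
		      0x0 0x00100000		/* parent (CPU) address */
		      0x0 0x3fe00000>;		/* size */
};
```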
Then there are also the display controller DARTs. If we wanted to use
dma-ranges, we could just put them behind a single sub-bus:
name            base      size
dart-disp0:     00000000  fc000000
dart-dcp:       00000000  fc000000
dart-dispext0:  00000000  fc000000
dart-dcpext:    00000000  fc000000
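The single sub-bus idea could look roughly like this (all node names, labels
and unit addresses below are made up; only the shared window, base 00000000 /
size fc000000, comes from the table above):

```dts
/* Sketch: group the display masters behind one simple-bus so a single
 * dma-ranges property covers all four DARTs. */
display-subsystem {
	compatible = "simple-bus";
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;
	/* restrict DMA for every child to 0x0 .. 0xfc000000 */
	dma-ranges = <0x0 0x0 0x0 0x0 0x0 0xfc000000>;

	disp0: display-controller@230000000 {
		/* hypothetical child node; its iommus property would
		 * point at dart-disp0 as usual */
		iommus = <&dart_disp0 0>;
	};
};
```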
And finally we have these strange ones, which might eventually each require
another awkward sub-bus if we want to stick with the dma-ranges property:

name        base      size
dart-aop:   00030000  ffffffff  ("always-on processor")
dart-pmp:   00000000  bff00000  (no idea yet)
dart-sio:   0021c000  fbde4000  (at least their Secure Enclave/TPM co-processor)
dart-ane:   00000000  e0000000  ("Neural Engine", their ML accelerator)
For all we know, these limitations could even arise for different reasons
(the Secure Enclave one looks like it might be imposed by the code running
there).
I'm not really sure how to proceed from here. I'll give the dma-ranges option
a try for v2 and see how it works out, but that's not going to help us
understand *why* these limitations exist.
At least I won't have to change much code if we agree on a different abstraction :)
The important ones for now are probably the USB and the PCIe ones. We'll need the
display ones after that and can probably ignore the strange ones for quite a while.
Best,
Sven