Message-ID: <3a8e1101-469b-4686-b160-6fdb1737a636@arm.com>
Date: Thu, 5 Jun 2025 14:37:48 +0100
From: Robin Murphy <robin.murphy@....com>
To: Tomeu Vizoso <tomeu@...euvizoso.net>
Cc: Daniel Stone <daniel@...ishbar.org>, Rob Herring <robh@...nel.org>,
 Maxime Ripard <mripard@...nel.org>, David Airlie <airlied@...il.com>,
 Simona Vetter <simona@...ll.ch>,
 Sebastian Reichel <sebastian.reichel@...labora.com>,
 Nicolas Frattaroli <nicolas.frattaroli@...labora.com>,
 Kever Yang <kever.yang@...k-chips.com>,
 linux-arm-kernel@...ts.infradead.org, linux-rockchip@...ts.infradead.org,
 linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH v6 06/10] accel/rocket: Add IOCTL for BO creation

On 05/06/2025 8:41 am, Tomeu Vizoso wrote:
[...]
>> In fact this is precisely the usage model I would suggest for this sort
>> of thing, and IIRC I had a similar conversation with the Ethos driver
>> folks a few years back. Running your own IOMMU domain is no biggie, see
>> several other DRM drivers (including rockchip). As long as you have a
>> separate struct device per NPU core then indeed it should be perfectly
>> straightforward to maintain distinct IOMMU domains per job, and
>> attach/detach them as part of scheduling the jobs on and off the cores.
>> Looks like rockchip-iommu supports cross-instance attach, so if
>> necessary you should already be OK to have multiple cores working on the
>> same job at once, without needing extra work at the IOMMU end.
> 
> Ok, so if I understood it correctly, the plan would be for each DRM
> client to have one IOMMU domain per each core (each core has its own
> IOMMU), and map all its buffers in all these domains.
> 
> Then when a job is about to be scheduled on a given core, make sure
> that the IOMMU for that core uses the domain for the client that
> submitted the job.
> 
> Did I get that right?

It should only need a single IOMMU domain per DRM client, so no faffing 
about replicating mappings. iommu_paging_domain_alloc() does need *an* 
appropriate target device so it can identify the right IOMMU driver, but 
that in itself doesn't preclude attaching other devices to the resulting 
domain as well as (or even instead of) the nominal one. In general, not 
all IOMMU drivers support cross-instance attach so it may fail with 
-EINVAL, and *that*'s when you might need to fall back to allocating 
multiple per-instance domains - but as I say since this is a 
Rockchip-specific driver where the IOMMU *is* intended to support this 
already, you don't need to bother.
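The flow described above could be sketched roughly as follows. This is a hypothetical illustration, not the actual rocket driver code: the `rocket_client`/`rocket_core` structures and function names are made up, and only the IOMMU API calls (`iommu_paging_domain_alloc()`, `iommu_attach_device()`, `iommu_detach_device()`) are real kernel interfaces:

```c
/* Hypothetical sketch: one IOMMU domain per DRM client, attached to
 * whichever NPU core the scheduler picks for a job. Struct and
 * function names here are illustrative, not from the rocket driver. */
#include <linux/iommu.h>

struct rocket_client {
	struct iommu_domain *domain;	/* all of this client's BOs mapped here */
};

static int rocket_client_init(struct rocket_client *client,
			      struct device *any_core_dev)
{
	/* Needs *a* target device only so the core IOMMU code can
	 * identify the right IOMMU driver; other devices can still be
	 * attached to the resulting domain later. */
	client->domain = iommu_paging_domain_alloc(any_core_dev);
	if (IS_ERR(client->domain))
		return PTR_ERR(client->domain);
	return 0;
}

static int rocket_job_run(struct rocket_client *client,
			  struct device *core_dev)
{
	int ret;

	/* Cross-instance attach: drivers that don't support it return
	 * -EINVAL, and that's where a fallback to per-instance domains
	 * would go - not needed with rockchip-iommu. */
	ret = iommu_attach_device(client->domain, core_dev);
	if (ret)
		return ret;

	/* ... submit the job to the core ... */
	return 0;
}

static void rocket_job_done(struct rocket_client *client,
			    struct device *core_dev)
{
	iommu_detach_device(client->domain, core_dev);
}
```

Attach/detach then simply brackets each job as part of scheduling it on and off a core, with no per-core replication of the client's mappings.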

Thanks,
Robin.
