Message-ID: <CAAObsKCjv=K7Dk=QD+MAqwWUNyw_pCh2Eqij3Qwx1jzKoKg4zw@mail.gmail.com>
Date: Thu, 5 Jun 2025 18:32:06 +0200
From: Tomeu Vizoso <tomeu@...euvizoso.net>
To: Robin Murphy <robin.murphy@....com>
Cc: Daniel Stone <daniel@...ishbar.org>, Rob Herring <robh@...nel.org>, 
	Maxime Ripard <mripard@...nel.org>, David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>, 
	Sebastian Reichel <sebastian.reichel@...labora.com>, 
	Nicolas Frattaroli <nicolas.frattaroli@...labora.com>, Kever Yang <kever.yang@...k-chips.com>, 
	linux-arm-kernel@...ts.infradead.org, linux-rockchip@...ts.infradead.org, 
	linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH v6 06/10] accel/rocket: Add IOCTL for BO creation

On Thu, Jun 5, 2025 at 3:37 PM Robin Murphy <robin.murphy@....com> wrote:
>
> On 05/06/2025 8:41 am, Tomeu Vizoso wrote:
> [...]
> >> In fact this is precisely the usage model I would suggest for this sort
> >> of thing, and IIRC I had a similar conversation with the Ethos driver
> >> folks a few years back. Running your own IOMMU domain is no biggie, see
> >> several other DRM drivers (including rockchip). As long as you have a
> >> separate struct device per NPU core then indeed it should be perfectly
> >> straightforward to maintain distinct IOMMU domains per job, and
> >> attach/detach them as part of scheduling the jobs on and off the cores.
> >> Looks like rockchip-iommu supports cross-instance attach, so if
> >> necessary you should already be OK to have multiple cores working on the
> >> same job at once, without needing extra work at the IOMMU end.
> >
> > Ok, so if I understood it correctly, the plan would be for each DRM
> > client to have one IOMMU domain per core (each core has its own
> > IOMMU), and to map all of its buffers into all of these domains.
> >
> > Then when a job is about to be scheduled on a given core, make sure
> > that the IOMMU for that core uses the domain for the client that
> > submitted the job.
> >
> > Did I get that right?
>
> It should only need a single IOMMU domain per DRM client, so no faffing
> about replicating mappings. iommu_paging_domain_alloc() does need *an*
> appropriate target device so it can identify the right IOMMU driver, but
> that in itself doesn't preclude attaching other devices to the resulting
> domain as well as (or even instead of) the nominal one. In general, not
> all IOMMU drivers support cross-instance attach so it may fail with
> -EINVAL, and *that*'s when you might need to fall back to allocating
> multiple per-instance domains - but as I say since this is a
> Rockchip-specific driver where the IOMMU *is* intended to support this
> already, you don't need to bother.
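
To make sure I follow, the allocation side would then be roughly the
below (only a sketch to check my understanding; the rkt_* names are
placeholders and not actual driver structs, iommu_paging_domain_alloc()
being the helper you mention):

	#include <linux/iommu.h>

	/* One IOMMU domain per DRM client, allocated against a nominal core. */
	static struct iommu_domain *rkt_client_domain_alloc(struct rkt_device *rkt)
	{
		struct iommu_domain *domain;

		/* Any core's struct device is enough to pick the right IOMMU driver. */
		domain = iommu_paging_domain_alloc(rkt->cores[0].dev);
		if (IS_ERR(domain))
			return domain;

		/*
		 * On an IOMMU driver without cross-instance attach, a later
		 * iommu_attach_group() on another core would return -EINVAL
		 * and per-instance domains would be needed instead;
		 * rockchip-iommu is expected to accept it, so no fallback here.
		 */
		return domain;
	}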

Ok, I did just that and it's working great in my testing:

I create a domain when the client opens the DRM connection and map all
of its BOs into it. Then, when a job is about to start, I detach
whatever domain was attached to the core's group and attach that
client's domain instead.
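
In sketch form that looks roughly like this (again with placeholder
rkt_* names; iommu_map(), iommu_attach_group() and iommu_detach_group()
are the same <linux/iommu.h> helpers as above):

	/*
	 * On DRM open: map each of the client's BOs into its domain. A real
	 * BO would more likely be sg-backed and go through iommu_map_sg().
	 */
	static int rkt_client_map_bo(struct rkt_client *client, struct rkt_bo *bo)
	{
		return iommu_map(client->domain, bo->iova, bo->paddr, bo->size,
				 IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
	}

	/*
	 * Just before a job runs: point the core's IOMMU at the submitting
	 * client's domain, detaching whatever was attached before.
	 */
	static int rkt_core_attach_client(struct rkt_core *core,
					  struct rkt_client *client)
	{
		if (core->attached_domain)
			iommu_detach_group(core->attached_domain, core->iommu_group);

		core->attached_domain = client->domain;
		return iommu_attach_group(client->domain, core->iommu_group);
	}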

Will send a v7 with it in a couple of days.

Thanks!

Tomeu
