Message-ID: <20250821131304.GM802098@nvidia.com>
Date: Thu, 21 Aug 2025 10:13:04 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: robin.murphy@....com, joro@...tes.org, bhelgaas@...gle.com,
will@...nel.org, robin.clark@....qualcomm.com, yong.wu@...iatek.com,
matthias.bgg@...il.com, angelogioacchino.delregno@...labora.com,
thierry.reding@...il.com, vdumpa@...dia.com, jonathanh@...dia.com,
rafael@...nel.org, lenb@...nel.org, kevin.tian@...el.com,
yi.l.liu@...el.com, baolu.lu@...ux.intel.com,
linux-arm-kernel@...ts.infradead.org, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org, linux-arm-msm@...r.kernel.org,
linux-mediatek@...ts.infradead.org, linux-tegra@...r.kernel.org,
linux-acpi@...r.kernel.org, linux-pci@...r.kernel.org,
patches@...ts.linux.dev, pjaroszynski@...dia.com, vsethi@...dia.com,
helgaas@...nel.org, etzhao1900@...il.com
Subject: Re: [PATCH v3 3/5] iommu: Add iommu_get_domain_for_dev_locked() helper

On Tue, Aug 19, 2025 at 10:22:20AM -0700, Nicolin Chen wrote:
> Yet, I also see some other cases that cannot be helped with the
> type function. Just listing a few:

Probably several query functions are needed that can be lock-safe,
along the lines of the sketches below.

> 1) domain matching (and type)
> drivers/gpu/drm/tegra/drm.c:965: if (domain && domain->type != IOMMU_DOMAIN_IDENTITY &&
> drivers/gpu/drm/tegra/drm.c:966: domain != tegra->domain)
> drivers/gpu/drm/tegra/drm.c-967- return 0;

Check whether a given domain is the one that is attached.
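
Roughly something like this, living in drivers/iommu/iommu.c where the
group internals are visible (the name and exact semantics are just my
sketch, not what this patch adds):

/*
 * Sketch only: answer "is this exact domain currently attached to the
 * device?" without handing the attached domain pointer to the caller.
 */
bool iommu_device_uses_domain(struct device *dev,
			      struct iommu_domain *domain)
{
	struct iommu_group *group = dev->iommu_group;
	bool ret;

	if (!group)
		return false;

	mutex_lock(&group->mutex);
	ret = group->domain == domain;
	mutex_unlock(&group->mutex);
	return ret;
}

tegra would still need a type query for its IOMMU_DOMAIN_IDENTITY
check, but it never has to look at the attached domain pointer itself.
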
> 2) page size
> drivers/gpu/drm/arm/malidp_planes.c:307: mmu_dom = iommu_get_domain_for_dev(mp->base.dev->dev);
> drivers/gpu/drm/arm/malidp_planes.c-308- if (mmu_dom)
> drivers/gpu/drm/arm/malidp_planes.c-309- return mmu_dom->pgsize_bitmap;

A query that just returns the page size bitmap.
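
Something like this, again with an assumed name:

unsigned long iommu_get_dev_pgsize_bitmap(struct device *dev)
{
	struct iommu_group *group = dev->iommu_group;
	unsigned long bitmap = 0;

	if (!group)
		return 0;

	mutex_lock(&group->mutex);
	if (group->domain)
		bitmap = group->domain->pgsize_bitmap;
	mutex_unlock(&group->mutex);
	return bitmap;
}
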
> 3) iommu_iova_to_phys
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c:2597: dom = iommu_get_domain_for_dev(adev->dev);
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c-2598-
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c-2599- while (size) {
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c-2600- phys_addr_t addr = *pos & PAGE_MASK;
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c-2601- loff_t off = *pos & ~PAGE_MASK;
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c-2602- size_t bytes = PAGE_SIZE - off;
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c-2603- unsigned long pfn;
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c-2604- struct page *p;
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c-2605- void *ptr;
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c-2606-
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c-2607- bytes = min(bytes, size);
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c-2608-
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c:2609: addr = dom ? iommu_iova_to_phys(dom, addr) : addr;

A lock-safe iova-to-phys that works directly from a struct device.
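
Sketch, assumed name again; callers like amdgpu keep their own
identity fallback for the no-translation case:

phys_addr_t iommu_dev_iova_to_phys(struct device *dev, dma_addr_t iova)
{
	struct iommu_group *group = dev->iommu_group;
	phys_addr_t phys = 0;

	if (!group)
		return 0;

	mutex_lock(&group->mutex);
	if (group->domain)
		phys = iommu_iova_to_phys(group->domain, iova);
	mutex_unlock(&group->mutex);
	return phys;
}
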
> 4) map/unmap
> drivers/net/ipa/ipa_mem.c:465: domain = iommu_get_domain_for_dev(dev);
> drivers/net/ipa/ipa_mem.c-466- if (!domain) {
> drivers/net/ipa/ipa_mem.c-467- dev_err(dev, "no IOMMU domain found for IMEM\n");
> drivers/net/ipa/ipa_mem.c-468- return -EINVAL;
> drivers/net/ipa/ipa_mem.c-469- }
> drivers/net/ipa/ipa_mem.c-470-
> drivers/net/ipa/ipa_mem.c-471- /* Align the address down and the size up to page boundaries */
> drivers/net/ipa/ipa_mem.c-472- phys = addr & PAGE_MASK;
> drivers/net/ipa/ipa_mem.c-473- size = PAGE_ALIGN(size + addr - phys);
> drivers/net/ipa/ipa_mem.c-474- iova = phys; /* We just want a direct mapping */
> drivers/net/ipa/ipa_mem.c-475-
> drivers/net/ipa/ipa_mem.c-476- ret = iommu_map(domain, iova, phys, size, IOMMU_READ | IOMMU_WRITE,
> ...
> drivers/net/ipa/ipa_mem.c:495: domain = iommu_get_domain_for_dev(dev);
> drivers/net/ipa/ipa_mem.c-496- if (domain) {
> drivers/net/ipa/ipa_mem.c-497- size_t size;
> drivers/net/ipa/ipa_mem.c-498-
> drivers/net/ipa/ipa_mem.c-499- size = iommu_unmap(domain, ipa->imem_iova, ipa->imem_size);

Broken! It is illegal to call iommu_map() on a DMA API domain.

This is exactly the sort of abuse I would like to see made
impossible :(

If it really needs something like this then it needs a proper DMA API
interface to do it and properly reserve the IOVA from the allocator.
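
For comparison, the closest legitimate thing today is
dma_map_resource(), which goes through the DMA API and reserves the
IOVA from the allocator - though it does not promise the iova == phys
direct mapping IPA is relying on here, which is why a new interface
would be needed:

/*
 * Sketch: map the IMEM region through the DMA API instead of calling
 * iommu_map() behind its back.  The returned IOVA comes out of the
 * DMA allocator and is not guaranteed to equal phys.
 */
static dma_addr_t ipa_imem_map(struct device *dev, phys_addr_t phys,
			       size_t size)
{
	dma_addr_t iova;

	iova = dma_map_resource(dev, phys, size, DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(dev, iova))
		return DMA_MAPPING_ERROR;

	return iova;
}
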
Jason