Message-ID: <20250924191055.GJ2617119@nvidia.com>
Date: Wed, 24 Sep 2025 16:10:55 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: joro@...tes.org, bhelgaas@...gle.com, suravee.suthikulpanit@....com,
will@...nel.org, robin.murphy@....com, sven@...nel.org,
j@...nau.net, alyssa@...enzweig.io, neal@...pa.dev,
robin.clark@....qualcomm.com, m.szyprowski@...sung.com,
krzk@...nel.org, alim.akhtar@...sung.com, dwmw2@...radead.org,
baolu.lu@...ux.intel.com, kevin.tian@...el.com,
yong.wu@...iatek.com, matthias.bgg@...il.com,
angelogioacchino.delregno@...labora.com, tjeznach@...osinc.com,
paul.walmsley@...ive.com, palmer@...belt.com, aou@...s.berkeley.edu,
alex@...ti.fr, heiko@...ech.de, schnelle@...ux.ibm.com,
mjrosato@...ux.ibm.com, gerald.schaefer@...ux.ibm.com,
orsonzhai@...il.com, baolin.wang@...ux.alibaba.com,
zhang.lyra@...il.com, wens@...e.org, jernej.skrabec@...il.com,
samuel@...lland.org, jean-philippe@...aro.org, rafael@...nel.org,
lenb@...nel.org, yi.l.liu@...el.com, cwabbott0@...il.com,
quic_pbrahma@...cinc.com, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org, asahi@...ts.linux.dev,
linux-arm-kernel@...ts.infradead.org, linux-arm-msm@...r.kernel.org,
linux-samsung-soc@...r.kernel.org,
linux-mediatek@...ts.infradead.org, linux-riscv@...ts.infradead.org,
linux-rockchip@...ts.infradead.org, linux-s390@...r.kernel.org,
linux-sunxi@...ts.linux.dev, linux-tegra@...r.kernel.org,
virtualization@...ts.linux.dev, linux-acpi@...r.kernel.org,
linux-pci@...r.kernel.org, patches@...ts.linux.dev,
vsethi@...dia.com, helgaas@...nel.org, etzhao1900@...il.com
Subject: Re: [PATCH v4 5/7] iommu: Add iommu_get_domain_for_dev_locked()
helper
On Sun, Aug 31, 2025 at 04:31:57PM -0700, Nicolin Chen wrote:
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index ea2ef53bd4fef..99680cdb57265 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -2097,7 +2097,7 @@ EXPORT_SYMBOL_GPL(dma_iova_destroy);
>
> void iommu_setup_dma_ops(struct device *dev)
> {
> - struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
> + struct iommu_domain *domain = iommu_get_domain_for_dev_locked(dev);
Let's have another patch to tidy this. This function can only be called on
the default_domain, so we can trivially pass it in. In all three cases the
default domain was just attached to the device.
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 060ebe330ee163..93e82d5776ff57 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -620,7 +620,7 @@ static int __iommu_probe_device(struct device *dev, struct list_head *group_list
}
if (group->default_domain)
- iommu_setup_dma_ops(dev);
+ iommu_setup_dma_ops(dev, group->default_domain);
mutex_unlock(&group->mutex);
@@ -1908,7 +1908,7 @@ static int bus_iommu_probe(const struct bus_type *bus)
return ret;
}
for_each_group_device(group, gdev)
- iommu_setup_dma_ops(gdev->dev);
+ iommu_setup_dma_ops(gdev->dev, group->default_domain);
mutex_unlock(&group->mutex);
/*
@@ -3104,7 +3104,7 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
/* Make sure dma_ops is appropriatley set */
for_each_group_device(group, gdev)
- iommu_setup_dma_ops(gdev->dev);
+ iommu_setup_dma_ops(gdev->dev, group->default_domain);
out_unlock:
mutex_unlock(&group->mutex);
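For completeness, the callee side in dma-iommu.c would drop the lookup and
take the domain directly -- a rough sketch of the matching change (not part
of the diff above; the declaration in include/linux/iommu.h would need the
same signature update):

-void iommu_setup_dma_ops(struct device *dev)
+void iommu_setup_dma_ops(struct device *dev, struct iommu_domain *domain)
 {
-	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
-
 	if (dev_is_pci(dev))
 		dev->iommu->pci_32bit_workaround = !iommu_dma_forcedac;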
> +/* Caller must be a general/external function that isn't an IOMMU callback */
> struct iommu_domain *iommu_get_domain_for_dev(struct device *dev)
Maybe a kdoc?
/**
* iommu_get_domain_for_dev() - Return the DMA API domain pointer
 * @dev: Device to query
*
* This function can be called within a driver bound to dev. The returned
* pointer is valid for the lifetime of the bound driver.
*
* It should not be called by drivers with driver_managed_dma = true.
*/
struct iommu_domain *iommu_get_domain_for_dev(struct device *dev)
I really wanted to say this should just always return the
default_domain, but it looks like host1x_client_iommu_detach() is the
only place outside the iommu drivers that would be unhappy with that.
Jason