Message-ID: <60970315-613f-4e62-8923-e162c29d9362@linux.intel.com>
Date: Wed, 12 Nov 2025 14:18:09 +0800
From: Baolu Lu <baolu.lu@...ux.intel.com>
To: Nicolin Chen <nicolinc@...dia.com>, joro@...tes.org, rafael@...nel.org,
bhelgaas@...gle.com, alex@...zbot.org, jgg@...dia.com, kevin.tian@...el.com
Cc: will@...nel.org, robin.murphy@....com, lenb@...nel.org,
linux-arm-kernel@...ts.infradead.org, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org, linux-acpi@...r.kernel.org,
linux-pci@...r.kernel.org, kvm@...r.kernel.org, patches@...ts.linux.dev,
pjaroszynski@...dia.com, vsethi@...dia.com, helgaas@...nel.org,
etzhao1900@...il.com
Subject: Re: [PATCH v5 4/5] iommu: Introduce iommu_dev_reset_prepare() and
iommu_dev_reset_done()
On 11/11/25 13:12, Nicolin Chen wrote:
> +/**
> + * iommu_dev_reset_prepare() - Block IOMMU to prepare for a device reset
> + * @dev: device that is going to enter a reset routine
> + *
> + * When a device is entering a reset routine, it wants to block any IOMMU
> + * activity during the reset routine. This includes blocking any translation
> + * as well as cache invalidation (especially the device cache).
> + *
> + * This function attaches all RIDs/PASIDs of the device to IOMMU_DOMAIN_BLOCKED,
> + * allowing any blocked-domain-supporting IOMMU driver to pause translation and
> + * cache invalidation, but leaves the software domain pointers intact, so that
> + * iommu_dev_reset_done() can restore everything later.
> + *
> + * Return: 0 on success or negative error code if the preparation failed.
> + *
> + * Callers must use iommu_dev_reset_prepare() and iommu_dev_reset_done() as a
> + * pair around the core-level reset routine, so that the resetting_domain is
> + * always unset again afterwards.
> + *
> + * These two functions are designed to be used by PCI reset functions, which
> + * cannot race with iommu_release_device(), since the PCI sysfs node is removed
> + * before the BUS_NOTIFY_REMOVED_DEVICE notification is sent. When using them
> + * in other cases, callers must ensure there can be no racy
> + * iommu_release_device() call, which would otherwise use-after-free the
> + * dev->iommu_group pointer.
> + */
> +int iommu_dev_reset_prepare(struct device *dev)
> +{
> + struct iommu_group *group = dev->iommu_group;
> + unsigned long pasid;
> + void *entry;
> + int ret = 0;
> +
> + if (!dev_has_iommu(dev))
> + return 0;
Nit: This interface is only for the PCI layer, so why not just
if (WARN_ON(!dev_is_pci(dev)))
return -EINVAL;
?
> +
> + guard(mutex)(&group->mutex);
> +
> + /*
> + * Once the resetting_domain is set, any concurrent attachment to this
> + * iommu_group will be rejected, which would break the attach routines
> + * of the sibling devices in the same iommu_group. So, skip this case.
> + */
> + if (dev_is_pci(dev)) {
> + struct group_device *gdev;
> +
> + for_each_group_device(group, gdev) {
> + if (gdev->dev != dev)
> + return 0;
> + }
> + }
With the above dev_is_pci() check, this can simply be:
if (list_count_nodes(&group->devices) != 1)
return 0;
> +
> + /* Re-entry is not allowed */
> + if (WARN_ON(group->resetting_domain))
> + return -EBUSY;
> +
> + ret = __iommu_group_alloc_blocking_domain(group);
> + if (ret)
> + return ret;
> +
> + /* Stage RID domain at blocking_domain while retaining group->domain */
> + if (group->domain != group->blocking_domain) {
> + ret = __iommu_attach_device(group->blocking_domain, dev,
> + group->domain);
> + if (ret)
> + return ret;
> + }
> +
> + /*
> + * Stage PASID domains at blocking_domain while retaining pasid_array.
> + *
> + * The pasid_array is mostly fenced by group->mutex, except one reader
> + * in iommu_attach_handle_get(), so it's safe to read without xa_lock.
> + */
> + xa_for_each_start(&group->pasid_array, pasid, entry, 1)
> + iommu_remove_dev_pasid(dev, pasid,
> + pasid_array_entry_to_domain(entry));
> +
> + group->resetting_domain = group->blocking_domain;
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(iommu_dev_reset_prepare);
> +
> +/**
> + * iommu_dev_reset_done() - Restore IOMMU after a device reset is finished
> + * @dev: device that has finished a reset routine
> + *
> + * When a device has finished a reset routine, it wants to restore its IOMMU
> + * activity, including new translation as well as cache invalidation, by
> + * re-attaching all RIDs/PASIDs of the device back to the domains retained in
> + * the core-level structure.
> + *
> + * Caller must pair it with a successfully returned iommu_dev_reset_prepare().
> + *
> + * Note that, although unlikely, re-attaching domains might fail due to an
> + * unexpected event such as OOM.
> + */
> +void iommu_dev_reset_done(struct device *dev)
> +{
> + struct iommu_group *group = dev->iommu_group;
> + unsigned long pasid;
> + void *entry;
> +
> + if (!dev_has_iommu(dev))
> + return;
> +
> + guard(mutex)(&group->mutex);
> +
> + /* iommu_dev_reset_prepare() was bypassed for the device */
> + if (!group->resetting_domain)
> + return;
> +
> + /* iommu_dev_reset_prepare() was not successfully called */
> + if (WARN_ON(!group->blocking_domain))
> + return;
> +
> + /* Re-attach RID domain back to group->domain */
> + if (group->domain != group->blocking_domain) {
> + WARN_ON(__iommu_attach_device(group->domain, dev,
> + group->blocking_domain));
> + }
> +
> + /*
> + * Re-attach PASID domains back to the domains retained in pasid_array.
> + *
> + * The pasid_array is mostly fenced by group->mutex, except one reader
> + * in iommu_attach_handle_get(), so it's safe to read without xa_lock.
> + */
> + xa_for_each_start(&group->pasid_array, pasid, entry, 1)
> + WARN_ON(__iommu_set_group_pasid(
> + pasid_array_entry_to_domain(entry), group, pasid,
> + group->blocking_domain));
> +
> + group->resetting_domain = NULL;
> +}
> +EXPORT_SYMBOL_GPL(iommu_dev_reset_done);
> +
> #if IS_ENABLED(CONFIG_IRQ_MSI_IOMMU)
> /**
> * iommu_dma_prepare_msi() - Map the MSI page in the IOMMU domain
Thanks,
baolu