Message-ID: <Ywyc/WxFg3zawXL5@osiris>
Date: Mon, 29 Aug 2022 13:03:25 +0200
From: Heiko Carstens <hca@...ux.ibm.com>
To: Matthew Rosato <mjrosato@...ux.ibm.com>
Cc: iommu@...ts.linux.dev, linux-s390@...r.kernel.org,
schnelle@...ux.ibm.com, pmorel@...ux.ibm.com,
borntraeger@...ux.ibm.com, gor@...ux.ibm.com,
gerald.schaefer@...ux.ibm.com, agordeev@...ux.ibm.com,
svens@...ux.ibm.com, joro@...tes.org, will@...nel.org,
robin.murphy@....com, jgg@...dia.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] iommu/s390: Fix race with release_device ops
On Fri, Aug 26, 2022 at 03:47:21PM -0400, Matthew Rosato wrote:
> With commit fa7e9ecc5e1c ("iommu/s390: Tolerate repeat attach_dev
> calls") s390-iommu is supposed to handle dynamic switching between
> IOMMU domains and DMA API handling. However, this commit does not
> sufficiently handle the case where the device is released via a call
> to the release_device op, which can race with a concurrent attach_dev
> or detach_dev since the group mutex is not held over release_device.
> This was observed when the device is deconfigured during a small
> window in vfio-pci initialization and can result in WARNs and
> potential kernel panics.
>
> Handle this by tracking when the device is probed/released via
> dev_iommu_priv_set/get(). Ensure that once the device is released only
> release_device handles the re-init of the device DMA.
>
> Fixes: fa7e9ecc5e1c ("iommu/s390: Tolerate repeat attach_dev calls")
> Signed-off-by: Matthew Rosato <mjrosato@...ux.ibm.com>
...
> + /* First check compatibility */
> + spin_lock_irqsave(&s390_domain->list_lock, flags);
> + /* First device defines the DMA range limits */
> + if (list_empty(&s390_domain->devices)) {
> + domain->geometry.aperture_start = zdev->start_dma;
> + domain->geometry.aperture_end = zdev->end_dma;
> + domain->geometry.force_aperture = true;
> + /* Allow only devices with identical DMA range limits */
> + } else if (domain->geometry.aperture_start != zdev->start_dma ||
> + domain->geometry.aperture_end != zdev->end_dma) {
> + rc = -EINVAL;
> + }
> + spin_unlock_irqrestore(&s390_domain->list_lock, flags);
...
> spin_lock_irqsave(&s390_domain->list_lock, flags);
> - /* First device defines the DMA range limits */
> - if (list_empty(&s390_domain->devices)) {
> - domain->geometry.aperture_start = zdev->start_dma;
> - domain->geometry.aperture_end = zdev->end_dma;
> - domain->geometry.force_aperture = true;
> - /* Allow only devices with identical DMA range limits */
> - } else if (domain->geometry.aperture_start != zdev->start_dma ||
> - domain->geometry.aperture_end != zdev->end_dma) {
> - rc = -EINVAL;
> - spin_unlock_irqrestore(&s390_domain->list_lock, flags);
> - goto out_restore;
> - }
> domain_device->zdev = zdev;
> - zdev->s390_domain = s390_domain;
> list_add(&domain_device->list, &s390_domain->devices);
> spin_unlock_irqrestore(&s390_domain->list_lock, flags);
Stupid question, but: how is this not racy when the spinlock is
released between checking for an empty list (and setting up the
aperture based on that) and actually adding to the list later, under a
new hold of the lock?
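
E.g. something like the following (untested, essentially what the hunk
removed above did) would keep the aperture check and the list_add under
a single hold of list_lock:

	spin_lock_irqsave(&s390_domain->list_lock, flags);
	/* First device defines the DMA range limits */
	if (list_empty(&s390_domain->devices)) {
		domain->geometry.aperture_start = zdev->start_dma;
		domain->geometry.aperture_end = zdev->end_dma;
		domain->geometry.force_aperture = true;
	/* Allow only devices with identical DMA range limits */
	} else if (domain->geometry.aperture_start != zdev->start_dma ||
		   domain->geometry.aperture_end != zdev->end_dma) {
		spin_unlock_irqrestore(&s390_domain->list_lock, flags);
		rc = -EINVAL;
		goto out_restore;
	}
	domain_device->zdev = zdev;
	list_add(&domain_device->list, &s390_domain->devices);
	spin_unlock_irqrestore(&s390_domain->list_lock, flags);

That way the aperture cannot change between the emptiness check and the
moment the device actually shows up on the list.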
> + mutex_lock(&zdev->dma_domain_lock);
> + dev_iommu_priv_set(dev, NULL);
> + mutex_unlock(&zdev->dma_domain_lock);
> + /* Make sure this device is removed from the domain list */
> domain = iommu_get_domain_for_dev(dev);
> if (domain)
> s390_iommu_detach_device(domain, dev);
> + /* Now ensure DMA is initialized from here */
> + mutex_lock(&zdev->dma_domain_lock);
> + if (zdev->s390_domain) {
> + zdev->s390_domain = NULL;
> + zpci_unregister_ioat(zdev, 0);
> + zpci_dma_init_device(zdev);
> + }
> + mutex_unlock(&zdev->dma_domain_lock);
Looking at the patch and this code, it is also anything but obvious
which _data_ is actually protected by the mutex. Anyway... just some
stupid comments while briefly looking at the patch :)
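
FWIW, what would already help is a comment next to dma_domain_lock
stating which data it protects, e.g. (untested sketch, only the members
referenced by this patch, everything else in struct zpci_dev elided):

	struct zpci_dev {
		...
		/*
		 * Protects s390_domain, i.e. the hand-over of the device
		 * between DMA API ownership and IOMMU domain ownership.
		 */
		struct mutex		dma_domain_lock;
		struct s390_domain	*s390_domain;
		...
	};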