Message-ID: <04bf5f9a-a170-55bd-10f0-fa3695b85347@arm.com>
Date:   Thu, 25 Aug 2022 12:26:33 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     Niklas Schnelle <schnelle@...ux.ibm.com>,
        Alexander Gordeev <agordeev@...ux.ibm.com>,
        Matthew Rosato <mjrosato@...ux.ibm.com>
Cc:     Pierre Morel <pmorel@...ux.ibm.com>, iommu@...ts.linux.dev,
        linux-s390@...r.kernel.org, borntraeger@...ux.ibm.com,
        hca@...ux.ibm.com, gor@...ux.ibm.com,
        gerald.schaefer@...ux.ibm.com, svens@...ux.ibm.com,
        joro@...tes.org, will@...nel.org, jgg@...dia.com,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] iommu/s390: Fix race with release_device ops

On 2022-08-25 12:11, Niklas Schnelle wrote:
> On Thu, 2022-08-25 at 09:22 +0200, Alexander Gordeev wrote:
>> On Wed, Aug 24, 2022 at 04:25:19PM -0400, Matthew Rosato wrote:
>>>>> @@ -90,15 +90,39 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
>>>>>  	struct zpci_dev *zdev = to_zpci_dev(dev);
>>>>>  	struct s390_domain_device *domain_device;
>>>>>  	unsigned long flags;
>>>>> -	int cc, rc;
>>>>> +	int cc, rc = 0;
>>>>>  
>>>>>  	if (!zdev)
>>>>>  		return -ENODEV;
>>>>>  
>>>>> +	/* First check compatibility */
>>>>> +	spin_lock_irqsave(&s390_domain->list_lock, flags);
>>>>> +	/* First device defines the DMA range limits */
>>>>> +	if (list_empty(&s390_domain->devices)) {
>>>>> +		domain->geometry.aperture_start = zdev->start_dma;
>>>>> +		domain->geometry.aperture_end = zdev->end_dma;
>>>>> +		domain->geometry.force_aperture = true;
>>>>> +	/* Allow only devices with identical DMA range limits */
>>>>> +	} else if (domain->geometry.aperture_start != zdev->start_dma ||
>>>>> +		   domain->geometry.aperture_end != zdev->end_dma) {
>>>>> +		rc = -EINVAL;
>>>>> +	}
>>>>> +	spin_unlock_irqrestore(&s390_domain->list_lock, flags);
>>>>> +	if (rc)
>>>>> +		return rc;
>>>>> +
>>>>>  	domain_device = kzalloc(sizeof(*domain_device), GFP_KERNEL);
>>>>>  	if (!domain_device)
>>>>>  		return -ENOMEM;
>>>>>  
>>>>> +	/* Leave now if the device has already been released */
>>>>> +	spin_lock_irqsave(&zdev->dma_domain_lock, flags);
>>>>> +	if (!dev_iommu_priv_get(dev)) {
>>>>> +		spin_unlock_irqrestore(&zdev->dma_domain_lock, flags);
>>>>> +		kfree(domain_device);
>>>>> +		return 0;
>>>>> +	}
>>>>> +
>>>>>  	if (zdev->dma_table && !zdev->s390_domain) {
>>>>>  		cc = zpci_dma_exit_device(zdev);
>>>>>  		if (cc) {
>>>>
>>>> Am I wrong? It seems to me that zpci_dma_exit_device() is called here
>>>> with the spinlock held, but zpci_dma_exit_device() calls vfree(), which
>>>> may sleep.
>>>>
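>>>> The shape of what I mean, not the exact call chain (with lockdep /
>>>> CONFIG_DEBUG_ATOMIC_SLEEP this should trigger "BUG: sleeping function
>>>> called from invalid context"):
>>>>
>>>> 	spin_lock_irqsave(&zdev->dma_domain_lock, flags);
>>>> 	...
>>>> 	zpci_dma_exit_device(zdev);	/* eventually calls vfree() */
>>>> 	...				/* vfree() may sleep */
>>>> 	spin_unlock_irqrestore(&zdev->dma_domain_lock, flags);
>>>>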
>>>
>>> Oh, good point, I just enabled lockdep to verify that.
>>>
>>> I think we could just replace this with a mutex instead; it's not a
>>> performance path. I've been running tests successfully today with this
>>> patch modified to use a mutex for dma_domain_lock.
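>>>
>>> A sketch of the change I'm testing (assumes zdev->dma_domain_lock
>>> becomes a struct mutex; it is only taken in process context, so no
>>> irqsave and no flags needed):
>>>
>>> 	/* Leave now if the device has already been released */
>>> 	mutex_lock(&zdev->dma_domain_lock);
>>> 	if (!dev_iommu_priv_get(dev)) {
>>> 		mutex_unlock(&zdev->dma_domain_lock);
>>> 		kfree(domain_device);
>>> 		return 0;
>>> 	}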
>>
>> But your original version uses the IRQ-safe spinlock variants.
>> Is there data that needs to be protected against interrupts?
>>
>> Thanks!
> 
> I think that was a carry-over from my original attempt, which took
> zdev->dma_domain_lock in a few more places, including in interrupt
> context. Those uses are gone now, so Matt is right that in his version
> this can be a mutex.

Yes, probe/release/attach/detach should absolutely not be happening from 
atomic/IRQ context. At the very least, the IOMMU core itself needs to 
take the group mutex in those paths.
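
For reference, the core attach path looks roughly like this (simplified,
not verbatim from drivers/iommu/iommu.c):

	int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
	{
		struct iommu_group *group = iommu_group_get(dev);
		int ret;

		if (!group)
			return -ENODEV;

		mutex_lock(&group->mutex);	/* sleeps; never valid from IRQ context */
		ret = __iommu_attach_group(domain, group);
		mutex_unlock(&group->mutex);

		iommu_group_put(group);
		return ret;
	}

so the driver-side attach/detach callbacks already run in sleepable
context under the group mutex.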

Cheers,
Robin.
