Date:   Tue, 02 Jun 2020 15:09:29 +0530
From:   guptap@...eaurora.org
To:     Robin Murphy <robin.murphy@....com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>, mhocko@...e.com,
        joro@...tes.org, linux-mm@...ck.org,
        iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
        owner-linux-mm@...ck.org, stable@...r.kernel.org
Subject: Re: [PATCH] iommu/dma: limit iova free size to unmapped iova

On 2020-05-26 12:49, guptap@...eaurora.org wrote:
> On 2020-05-22 14:54, Robin Murphy wrote:
>> On 2020-05-22 07:25, guptap@...eaurora.org wrote:
>>> On 2020-05-22 01:46, Robin Murphy wrote:
>>>> On 2020-05-21 12:30, Prakash Gupta wrote:
>>> I agree, we shouldn't be freeing the partial iova. Instead, just making
>>> sure the unmap was successful before freeing the iova should be
>>> sufficient. So the change could instead be something like this:
>>> 
>>> -    iommu_dma_free_iova(cookie, dma_addr, size);
>>> +    if (unmapped)
>>> +        iommu_dma_free_iova(cookie, dma_addr, size);
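>>> 
>>> For context, this check would sit at the end of __iommu_dma_unmap() in
>>> drivers/iommu/dma-iommu.c, roughly as in the trimmed sketch below (the
>>> exact surroundings depend on the kernel version):
>>> 
>>>     unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather);
>>>     /* ... TLB sync etc. elided ... */
>>>     /* only hand the range back to the iova allocator if it was really unmapped */
>>>     if (unmapped)
>>>         iommu_dma_free_iova(cookie, dma_addr, size);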
>>> 
>>>> TBH my gut feeling here is that you're really just trying to treat a
>>>> symptom of another bug elsewhere, namely some driver calling
>>>> dma_unmap_* or dma_free_* with the wrong address or size in the first
>>>> place.
>>>> 
>>> This condition would arise only if a driver calls dma_unmap/free_* with
>>> a 0 iova_pfn. That is flagged with a warning during the unmap, but it
>>> triggers a panic later on, during an unrelated dma_map/unmap_*. If the
>>> unmap has already failed for an invalid iova, there is no reason we
>>> should treat it as a valid iova and free it. This part should be fixed.
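>>> 
>>> Purely for illustration (made-up names, not from this patch), one
>>> hypothetical way a driver ends up here is an error path that unmaps a
>>> handle which was never successfully mapped:
>>> 
>>>     dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
>>>     if (dma_mapping_error(dev, handle))
>>>         handle = 0;           /* bug: error path forgets to bail out */
>>>     /* ... later, common cleanup runs unconditionally ... */
>>>     dma_unmap_single(dev, handle, len, DMA_TO_DEVICE); /* iova_pfn 0 */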
>> 
>> I disagree. In general, if drivers call the DMA API incorrectly it is
>> liable to lead to data loss, memory corruption, and various other
>> unpleasant misbehaviour - it is not the DMA layer's job to attempt to
>> paper over driver bugs.
>> 
>> There *is* an argument for downgrading the BUG_ON() in
>> iova_magazine_free_pfns() to a WARN_ON(), since frankly it isn't a
>> sufficiently serious condition to justify killing the whole machine
>> immediately, but NAK to bodging the iommu-dma mid-layer to "fix" that.
>> A serious bug already happened elsewhere, so trying to hide the
>> fallout really doesn't help anyone.
>> 
> Sorry for the delayed response, it was a long weekend.
> I agree that an invalid DMA API call can result in unexpected issues and
> the client should fix it, but the present behavior makes it difficult to
> catch cases where a driver is making wrong DMA API calls. When an invalid
> iova pfn is passed, it doesn't fail then and there, even though the DMA
> layer knows the iova is invalid. It fails much later, in the context of a
> valid map/unmap, with a BUG_ON().
> 
> Downgrading the BUG_ON() to a WARN_ON() in iova_magazine_free_pfns() will
> not help much, as the invalid iova will still cause a NULL pointer
> dereference (see the sketch below).
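> 
> For reference, the loop in iova_magazine_free_pfns() (drivers/iommu/iova.c)
> looks roughly like this; a bare WARN_ON() would still pass the NULL iova to
> private_free_iova(), so the downgrade only helps if the stale entry is also
> skipped (a sketch, not a tested patch):
> 
>     for (i = 0; i < mag->size; ++i) {
>         struct iova *iova = private_find_iova(iovad, mag->pfns[i]);
> 
>         if (WARN_ON(!iova))    /* today this is BUG_ON(!iova) */
>             continue;          /* skip, or private_free_iova() derefs NULL */
> 
>         private_free_iova(iovad, iova);
>     }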
> 
> I see no reason why the DMA layer would want to free an iova for which the
> unmap failed. IMHO, queuing an invalid iova (which already failed unmap) to
> the rcache, where it will eventually crash the system, looks like an
> iommu-dma layer issue.
> 
> Thanks,
> Prakash

ping?
