Message-ID: <9b5f8501-6e6e-0cd2-7f98-7cfea13051d7@arm.com>
Date: Tue, 2 Jun 2020 14:07:12 +0100
From: Robin Murphy <robin.murphy@....com>
To: guptap@...eaurora.org
Cc: mhocko@...e.com, owner-linux-mm@...ck.org,
linux-kernel@...r.kernel.org, stable@...r.kernel.org,
linux-mm@...ck.org, iommu@...ts.linux-foundation.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] iommu/dma: limit iova free size to unmmaped iova
On 2020-05-26 08:19, guptap@...eaurora.org wrote:
> On 2020-05-22 14:54, Robin Murphy wrote:
>> On 2020-05-22 07:25, guptap@...eaurora.org wrote:
>>> On 2020-05-22 01:46, Robin Murphy wrote:
>>>> On 2020-05-21 12:30, Prakash Gupta wrote:
>>> I agree, we shouldn't be freeing the partial iova. Instead, checking
>>> that the unmap actually succeeded before freeing the iova should be
>>> sufficient. So the change can instead be something like this:
>>>
>>> - iommu_dma_free_iova(cookie, dma_addr, size);
>>> + if (unmapped)
>>> + iommu_dma_free_iova(cookie, dma_addr, size);
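[The intent of the proposed check can be modelled in isolation. The sketch below is a standalone toy, not the kernel code: `fake_iommu_unmap`, `fake_free_iova` and `dma_unmap_model` are hypothetical stand-ins for `iommu_unmap()`, `iommu_dma_free_iova()` and `__iommu_dma_unmap()`, used only to show the intended control flow.]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in: an invalid IOVA (e.g. 0 from a bad driver
 * call) unmaps nothing, a valid one unmaps the full size. */
static size_t fake_iommu_unmap(unsigned long iova, size_t size)
{
	return iova ? size : 0;
}

static int freed;	/* counts fake_free_iova() calls */

static void fake_free_iova(void)
{
	freed++;
}

/* Models the proposed change: only release the IOVA allocation when
 * the underlying unmap reported success. */
static void dma_unmap_model(unsigned long iova, size_t size)
{
	size_t unmapped = fake_iommu_unmap(iova, size);

	if (unmapped)
		fake_free_iova();
}
```

[In this model, an unmap of a bogus IOVA leaves the free count untouched instead of pushing an invalid PFN toward the rcache.]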
>>>
>>>> TBH my gut feeling here is that you're really just trying to treat a
>>>> symptom of another bug elsewhere, namely some driver calling
>>>> dma_unmap_* or dma_free_* with the wrong address or size in the first
>>>> place.
>>>>
>>> This condition would arise only if a driver calls dma_unmap/free_*
>>> with a 0 iova_pfn. That is flagged with a warning during unmap, but
>>> triggers a panic later on during an unrelated dma_map/unmap_*. If the
>>> unmap has already failed for an invalid iova, there is no reason we
>>> should treat it as a valid iova and free it. This part should be fixed.
>>
>> I disagree. In general, if drivers call the DMA API incorrectly it is
>> liable to lead to data loss, memory corruption, and various other
>> unpleasant misbehaviour - it is not the DMA layer's job to attempt to
>> paper over driver bugs.
>>
>> There *is* an argument for downgrading the BUG_ON() in
>> iova_magazine_free_pfns() to a WARN_ON(), since frankly it isn't a
>> sufficiently serious condition to justify killing the whole machine
>> immediately, but NAK to bodging the iommu-dma mid-layer to "fix" that.
>> A serious bug already happened elsewhere, so trying to hide the
>> fallout really doesn't help anyone.
>>
> Sorry for the delayed response, it was a long weekend.
> I agree that an invalid DMA API call can result in unexpected issues and
> the client should fix it, but the present behaviour makes it difficult
> to catch cases where a driver is making wrong DMA API calls. When an
> invalid iova pfn is passed, it doesn't fail then and there, even though
> the DMA layer is aware the iova is invalid. It fails much later, in the
> context of a valid map/unmap, with a BUG_ON().
>
> Downgrading BUG_ON() to WARN_ON() in iova_magazine_free_pfns() will not
> help much, as an invalid iova will cause a NULL pointer dereference.
Obviously I didn't mean a literal s/BUG/WARN/ substitution - some
additional control flow to actually handle the error case was implied.
I'll write up the patch myself, since it's easier than further debating.
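[One plausible shape for that additional control flow: warn and skip the bogus entry rather than dereference the NULL lookup result. The sketch below is a self-contained model, not Robin's actual patch: `model_find_iova` and `free_pfns_model` are hypothetical stand-ins for `private_find_iova()` and the loop in `iova_magazine_free_pfns()`.]

```c
#include <assert.h>
#include <stdio.h>

struct iova {
	unsigned long pfn;
};

/* Hypothetical lookup: returns NULL for a PFN that was never
 * allocated, mimicking a failed rbtree search. */
static struct iova *model_find_iova(unsigned long pfn)
{
	static struct iova valid = { .pfn = 42 };

	return pfn == 42 ? &valid : NULL;
}

static int freed_count;

static void free_pfns_model(const unsigned long *pfns, int n)
{
	for (int i = 0; i < n; i++) {
		struct iova *iova = model_find_iova(pfns[i]);

		/*
		 * Instead of BUG_ON(!iova): warn and skip the bogus
		 * entry, so one bad unmap neither kills the machine
		 * nor dereferences NULL further down.
		 */
		if (!iova) {
			fprintf(stderr, "WARN: unknown PFN 0x%lx\n",
				pfns[i]);
			continue;
		}
		freed_count++;
	}
}
```

[The machine stays up, the valid entries are still freed, and the warning fires at the point where the stale PFN is first noticed.]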
> I see no reason why the DMA layer would want to free an iova for which
> unmap failed. IMHO, queuing an invalid iova (which already failed unmap)
> to the rcache, which will eventually crash the system, looks like an
> iommu-dma layer issue.
What if the unmap fails because the address range is already entirely
unmapped? Freeing the IOVA (or at least attempting to) would be
logically appropriate in that case. In fact some IOMMU drivers might not
even consider that a failure, so the DMA layer may not even be aware
that it's been handed a bogus unallocated address.
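[That ambiguity is easy to model: a driver whose unmap simply clears page-table entries may report the full size even for a range that was never mapped, while a stricter one reports 0. Both functions below are hypothetical toys, not any real IOMMU driver.]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define NPAGES 16

/* Toy page table: one entry per page, nonzero = mapped. */
static unsigned char pt[NPAGES];

/* Strict model: stops at the first page that is not mapped. */
static size_t unmap_strict(size_t page, size_t n)
{
	size_t done = 0;

	for (size_t i = page; i < page + n && pt[i]; i++, done++)
		pt[i] = 0;
	return done;
}

/* Lenient model: clears entries unconditionally and always reports
 * the full requested size. */
static size_t unmap_lenient(size_t page, size_t n)
{
	memset(&pt[page], 0, n);
	return n;
}
```

[Against a range that was never mapped, the strict model returns 0 while the lenient one returns n, so a "did the unmap succeed?" check in the mid-layer cannot reliably distinguish a correct unmap from a bogus one.]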
The point is that unmapping *doesn't* fail under normal and correct
operation, so the DMA layer should not expect to have to handle it. Even
if it does happen, that's a highly exceptional case that the DMA layer
cannot recover from by itself; at best it can just push the problem
elsewhere. It's pretty hard to justify doing extra work to simply move
an exceptional problem around without really addressing it.
And in this particular case, personally I would *much* rather see
warnings spewing from both the pagetable and IOVA code as early as
possible to clearly indicate that the DMA layer itself has been thrown
out of sync, than just have warnings that might represent some other
source of pagetable corruption (or at worst, depending on the pagetable
code, no warnings at all and only have dma_map_*() calls quietly start
failing much, much later due to all the IOVA space having been leaked by
bad unmaps).
Robin.