Message-ID: <4b6a864674a9231b3ac35e5ce0c7292f@codeaurora.org>
Date: Wed, 03 Jun 2020 13:07:08 +0530
From: guptap@...eaurora.org
To: Robin Murphy <robin.murphy@....com>
Cc: mhocko@...e.com, owner-linux-mm@...ck.org,
linux-kernel@...r.kernel.org, stable@...r.kernel.org,
linux-mm@...ck.org, iommu@...ts.linux-foundation.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] iommu/dma: limit iova free size to unmapped iova
On 2020-06-02 18:37, Robin Murphy wrote:
> On 2020-05-26 08:19, guptap@...eaurora.org wrote:
>> On 2020-05-22 14:54, Robin Murphy wrote:
>>> On 2020-05-22 07:25, guptap@...eaurora.org wrote:
>>>> On 2020-05-22 01:46, Robin Murphy wrote:
>>>>> On 2020-05-21 12:30, Prakash Gupta wrote:
>> Sorry for the delayed response; it was a long weekend.
>> I agree that an invalid DMA API call can result in unexpected issues and
>> the client should fix it, but the present behavior makes it difficult to
>> catch cases where a driver is making wrong DMA API calls. When an invalid
>> iova pfn is passed it doesn't fail then and there, even though the DMA
>> layer is aware of the iova being invalid. It fails much later, in the
>> context of a valid map/unmap, with a BUG_ON().
>>
>> Downgrading BUG_ON() to WARN_ON() in iova_magazine_free_pfns() will not
>> help much, as an invalid iova will cause a NULL pointer dereference.
>
> Obviously I didn't mean a literal s/BUG/WARN/ substitution - some
> additional control flow to actually handle the error case was implied.
>
> I'll write up the patch myself, since it's easier than further
> debating.
>
I think it will address the issue I am facing as well. I will wait for
your patch.
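Just to capture what I understood (a rough sketch based on the current
iova_magazine_free_pfns() in drivers/iommu/iova.c, not an actual patch;
your change may well look different):

	for (i = 0; i < mag->size; i++) {
		struct iova *iova = private_find_iova(iovad, mag->pfns[i]);

		/* was BUG_ON(!iova): warn and skip the bogus pfn instead */
		if (WARN_ON(!iova))
			continue;

		private_free_iova(iovad, iova);
	}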
>> I see no reason why the DMA layer would want to free an iova for which
>> unmap failed. IMHO queuing an invalid iova (which already failed unmap)
>> to the rcache, which is eventually going to crash the system, looks like
>> an iommu-dma layer issue.
>
> What if the unmap fails because the address range is already entirely
> unmapped? Freeing the IOVA (or at least attempting to) would be
> logically appropriate in that case. In fact some IOMMU drivers might
> not even consider that a failure, so the DMA layer may not even be
> aware that it's been handed a bogus unallocated address.
>
> The point is that unmapping *doesn't* fail under normal and correct
> operation, so the DMA layer should not expect to have to handle it.
> Even if it does happen, that's a highly exceptional case that the DMA
> layer cannot recover from by itself; at best it can just push the
> problem elsewhere. It's pretty hard to justify doing extra work to
> simply move an exceptional problem around without really addressing
> it.
>
iommu_unmap() expects all areas within the unmap size to be mapped; in
fact, it abandons further unmapping if it finds any chunk that is not
mapped. So if an IOMMU implementation does not return a failure for an
already unmapped area, it is prone to an issue where the DMA API reuses
an iova which is still mapped.
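My reading here is based on the unmap loop in __iommu_unmap() in
drivers/iommu/iommu.c, which (roughly, quoting from memory) does:

	while (unmapped < size) {
		size_t pgsize = iommu_pgsize(domain, iova, size - unmapped);

		unmapped_page = ops->unmap(domain, iova, pgsize, iotlb_gather);
		if (!unmapped_page)
			break;		/* chunk wasn't mapped: abandon the rest */

		iova += unmapped_page;
		unmapped += unmapped_page;
	}

	return unmapped;	/* can be less than the requested size */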
In the case where the IOMMU implementation does return an error for an
already unmapped area, there is currently no way to distinguish an unmap
failure caused by the range already being unmapped from, say, a failure
where a memory allocation fails while partially unmapping a section
mapping and the unmap returns an error.

In the second case we would free the iova even though a mapping is still
present, and subsequent DMA mappings will keep failing if they reuse this
freed iova. For managed iovas both map and unmap should be done with the
DMA APIs, so it should be safe to expect that when a range is unmapped
with the DMA APIs the corresponding iova is also freed; there shouldn't
be a real need to free an iova whose unmap failed because the range was
already entirely unmapped.
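To make the intent of my patch concrete, the idea was essentially to free
only what actually got unmapped in __iommu_dma_unmap(), along these lines
(a simplified sketch against drivers/iommu/dma-iommu.c; the exact code in
the patch may differ):

	unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather);
	WARN_ON(unmapped != size);

	if (!cookie->fq_domain)
		iommu_tlb_sync(domain, &iotlb_gather);

	/* only free the iova range that was really unmapped */
	if (unmapped)
		iommu_dma_free_iova(cookie, dma_addr, unmapped);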
> And in this particular case, personally I would *much* rather see
> warnings spewing from both the pagetable and IOVA code as early as
> possible to clearly indicate that the DMA layer itself has been thrown
> out of sync, than just have warnings that might represent some other
> source of pagetable corruption (or at worst, depending on the
> pagetable code, no warnings at all and only have dma_map_*() calls
> quietly start failing much, much later due to all the IOVA space
> having been leaked by bad unmaps).
>
I am not sure how useful this freed iova is if the corresponding mappings
are not unmapped; we won't be able to use that iova, and a subsequent
iommu_map() using it will fail. So it's important to ensure we only free
an iova which has been unmapped successfully.
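For example, with the ARM long-descriptor io-pgtable code my understanding
is that mapping over an existing leaf entry is rejected outright;
arm_lpae_init_pte() in drivers/iommu/io-pgtable-arm.c does roughly:

	if (iopte_leaf(*ptep, lvl, data->iop.fmt)) {
		/* We require an unmap first */
		WARN_ON(!selftest_running);
		return -EEXIST;
	}

so a reused iova that is still mapped would just make the new map fail.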
Thanks,
Prakash