Message-Id: <20220607164953.205306807@linuxfoundation.org>
Date: Tue, 7 Jun 2022 19:04:43 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Yunfei Wang <yf.wang@...iatek.com>,
Robin Murphy <robin.murphy@....com>,
Miles Chen <miles.chen@...iatek.com>,
Joerg Roedel <jroedel@...e.de>
Subject: [PATCH 5.15 618/667] iommu/dma: Fix iova map result check bug
From: Yunfei Wang <yf.wang@...iatek.com>
commit a3884774d731f03d3a3dd4fb70ec2d9341ceb39d upstream.
The return type of iommu_map_sg_atomic() is ssize_t, but the iova
size is a size_t, i.e. one is signed while the other is unsigned.

When the return value of iommu_map_sg_atomic() is compared with the
iova size, the signed value is implicitly converted to unsigned. If
the mapping fails and iommu_map_sg_atomic() returns a negative error
code, that code is converted to a huge unsigned value, so
(ret < iova_len) is false: the iova is never freed, and the caller
can still successfully obtain the iova of a failed mapping, which is
not expected.

Therefore, check the return value of iommu_map_sg_atomic() in two
steps: first for a negative error code, then against the expected
mapping size.
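The conversion pitfall is easy to reproduce outside the kernel. The
following sketch (plain userspace C, not kernel code; map_sg() is a
hypothetical stand-in for iommu_map_sg_atomic()) shows the old check
missing a negative return and the fixed two-step check catching it:

	#include <stdio.h>
	#include <sys/types.h>	/* ssize_t */

	/* Hypothetical stand-in for iommu_map_sg_atomic() failing:
	 * it returns a negative error code instead of the number of
	 * bytes mapped. */
	static ssize_t map_sg(void)
	{
		return -12;	/* -ENOMEM */
	}

	int main(void)
	{
		size_t iova_len = 4096;
		ssize_t ret = map_sg();

		/* Old check: ret is implicitly converted to size_t,
		 * so -12 becomes a huge unsigned value and the
		 * comparison is false. */
		if (ret < iova_len)
			printf("old check: failure detected\n");
		else
			printf("old check: failure missed\n"); /* prints */

		/* Fixed check: test for a negative error code first,
		 * then compare against the expected size. */
		if (ret < 0 || ret < iova_len)
			printf("new check: failure detected\n"); /* prints */

		return 0;
	}

Casting ret to ssize_t in the size comparison would also avoid the
conversion, but checking ret < 0 first keeps the size comparison
as-is and makes the error path explicit.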
Fixes: ad8f36e4b6b1 ("iommu: return full error code from iommu_map_sg[_atomic]()")
Signed-off-by: Yunfei Wang <yf.wang@...iatek.com>
Cc: <stable@...r.kernel.org> # 5.15.*
Reviewed-by: Robin Murphy <robin.murphy@....com>
Reviewed-by: Miles Chen <miles.chen@...iatek.com>
Link: https://lore.kernel.org/r/20220507085204.16914-1-yf.wang@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@...e.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/iommu/dma-iommu.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -619,6 +619,7 @@ static struct page **__iommu_dma_alloc_n
 	unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
 	struct page **pages;
 	dma_addr_t iova;
+	ssize_t ret;
 
 	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
 	    iommu_deferred_attach(dev, domain))
@@ -656,8 +657,8 @@ static struct page **__iommu_dma_alloc_n
 		arch_dma_prep_coherent(sg_page(sg), sg->length);
 	}
 
-	if (iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot)
-			< size)
+	ret = iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot);
+	if (ret < 0 || ret < size)
 		goto out_free_sg;
 
 	sgt->sgl->dma_address = iova;
@@ -1054,7 +1055,7 @@ static int iommu_dma_map_sg(struct devic
 	 * implementation - it knows better than we do.
 	 */
 	ret = iommu_map_sg_atomic(domain, iova, sg, nents, prot);
-	if (ret < iova_len)
+	if (ret < 0 || ret < iova_len)
 		goto out_free_iova;
 
 	return __finalise_sg(dev, sg, nents, iova);