Message-ID: <20240527232625.462045-1-andriy.shevchenko@linux.intel.com>
Date: Tue, 28 May 2024 02:26:25 +0300
From: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
To: iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org
Cc: Robin Murphy <robin.murphy@....com>,
Joerg Roedel <joro@...tes.org>,
Will Deacon <will@...nel.org>,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
	Nícolas F. R. A. Prado <nfraprado@...labora.com>
Subject: [PATCH v1 1/1] iommu/dma: Make SG mapping and syncing robust against empty tables

The DMA mapping and syncing API may be called for an empty SG table,
where the number of original entries is 0 and the pointer to the SG
list may not be initialised at all. This worked until a relatively
recent change started dereferencing the SG list without checking the
number of original entries against 0, which can lead to a NULL pointer
dereference if the caller does not perform a preliminary check.

Statistically, only a few callers in the kernel perform such a check;
making the remaining 99%+ of the cases do the same would effectively
be a regression caused by the above-mentioned change.

Instead of asking callers to perform the checks, restore the status
quo in the SG mapping and syncing callbacks, so they won't crash on an
uninitialised SG list.
Reported-by: Nícolas F. R. A. Prado <nfraprado@...labora.com>
Closes: https://lore.kernel.org/all/d3679496-2e4e-4a7c-97ed-f193bd53af1d@notapiano
Fixes: 861370f49ce4 ("iommu/dma: force bouncing if the size is not cacheline-aligned")
Fixes: 8cc3bad9d9d6 ("spi: Remove unneded check for orig_nents")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
---
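For reviewers, a minimal sketch of the failing pattern described in the
changelog. This is hypothetical caller code, not from this series; the
function and variable names are made up for illustration:

  #include <linux/dma-mapping.h>
  #include <linux/scatterlist.h>

  /* Hypothetical caller; names are illustrative only. */
  static void example_sync_empty(struct device *dev)
  {
  	/* An "empty" table: zero entries, list head never initialised. */
  	struct sg_table sgt = { .sgl = NULL, .nents = 0, .orig_nents = 0 };

  	/*
  	 * Without the guards added below, iommu_dma_sync_sg_for_device()
  	 * reaches sg_dma_is_swiotlb(sgt.sgl) and dereferences the NULL
  	 * list head; with the nelems check it returns early instead.
  	 */
  	dma_sync_sg_for_device(dev, sgt.sgl, sgt.orig_nents, DMA_TO_DEVICE);
  }
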
drivers/iommu/dma-iommu.c | 9 +++++++++
1 file changed, 9 insertions(+)
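
The caller-side alternative that the changelog argues against would
look roughly like this (a sketch only; example_map() and the
caller-owned sg_table are assumptions, not in-tree code):

  #include <linux/dma-mapping.h>
  #include <linux/scatterlist.h>

  /* Hypothetical caller-side guard; only a few in-tree users do this. */
  static int example_map(struct device *dev, struct sg_table *sgt)
  {
  	if (!sgt->orig_nents)	/* skip empty tables up front */
  		return 0;

  	return dma_map_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
  }
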
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index f731e4b2a417..83c9013aa341 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1108,6 +1108,9 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 	struct scatterlist *sg;
 	int i;
 
+	if (nelems < 1)
+		return;
+
 	if (sg_dma_is_swiotlb(sgl))
 		for_each_sg(sgl, sg, nelems, i)
 			iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
@@ -1124,6 +1127,9 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 	struct scatterlist *sg;
 	int i;
 
+	if (nelems < 1)
+		return;
+
 	if (sg_dma_is_swiotlb(sgl))
 		for_each_sg(sgl, sg, nelems, i)
 			iommu_dma_sync_single_for_device(dev,
@@ -1324,6 +1330,9 @@ static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg,
 	struct scatterlist *s;
 	int i;
 
+	if (nents < 1)
+		return nents;
+
 	sg_dma_mark_swiotlb(sg);
 
 	for_each_sg(sg, s, nents, i) {
--
2.43.0.rc1.1336.g36b5255a03ac