Message-Id: <20200314000007.13778-1-nicoleotsuka@gmail.com>
Date:   Fri, 13 Mar 2020 17:00:07 -0700
From:   Nicolin Chen <nicoleotsuka@...il.com>
To:     robin.murphy@....com, m.szyprowski@...sung.com, hch@....de
Cc:     linux-kernel@...r.kernel.org, iommu@...ts.linux-foundation.org
Subject: [RFC][PATCH] dma-mapping: align default segment_boundary_mask with dma_mask

More and more drivers set dma_masks above DMA_BIT_MASK(32), while
only a handful of drivers call dma_set_seg_boundary(). As a result,
most drivers end up with a 4GB segmentation boundary, because the
DMA API returns DMA_BIT_MASK(32) as the default value, even though
their hardware may be able to handle addresses above 32 bits.
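
For reference, a driver that really does need a tighter boundary is
expected to set one explicitly. A minimal sketch, with a hypothetical
foo_probe() and purely illustrative mask values, assuming the bus code
has already allocated dev->dma_parms (dma_set_seg_boundary() returns
-EIO otherwise):

#include <linux/dma-mapping.h>
#include <linux/platform_device.h>

/* Hypothetical probe(): allow 64-bit DMA but keep segments within 4GB */
static int foo_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	int ret;

	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
	if (ret)
		return ret;

	/* Explicitly request a 4GB segment boundary */
	return dma_set_seg_boundary(dev, DMA_BIT_MASK(32));
}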

This can result in iommu_map_sg() cutting an IOVA region larger
than 4GB into discontiguous pieces, creating a faulty IOVA mapping
that overlaps physical memory outside the scatterlist. Once the
device DMAs into that faulty IOVA space, the overwritten memory can
lead to a seemingly random kernel panic.

CONFIG_DMA_API_DEBUG_SG in kernel/dma/debug.c checks for exactly
this situation and would catch it. However, the check is not
mandatory, and one might not think of enabling it while debugging a
random kernel panic until having already figured out that it is
related to iommu_map_sg().
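
For anyone who does want that check, enabling it is a matter of
kernel configuration; CONFIG_DMA_API_DEBUG_SG depends on
CONFIG_DMA_API_DEBUG, so roughly:

	# .config fragment (illustrative): turn on the scatter-gather checks
	CONFIG_DMA_API_DEBUG=y
	CONFIG_DMA_API_DEBUG_SG=y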

A safer solution may be to align the default segmentation boundary
with the configured dma_mask, so that the DMA API creates the
contiguous IOVA space the device expects. Although it is the device
driver's responsibility to set dma_parms, it is neither fair nor
safe to keep applying a 4GB default boundary that was added a
decade ago, when mappings never exceeded 4GB.

This patch updates the default segment_boundary_mask by aligning
it with dma_mask.

Signed-off-by: Nicolin Chen <nicoleotsuka@...il.com>
---
 include/linux/dma-mapping.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 330ad58fbf4d..0df0ee92eba1 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -736,7 +736,7 @@ static inline unsigned long dma_get_seg_boundary(struct device *dev)
 {
 	if (dev->dma_parms && dev->dma_parms->segment_boundary_mask)
 		return dev->dma_parms->segment_boundary_mask;
-	return DMA_BIT_MASK(32);
+	return (unsigned long)dma_get_mask(dev);
 }
 
 static inline int dma_set_seg_boundary(struct device *dev, unsigned long mask)
-- 
2.17.1
