Message-Id: <20190416182347.18441-9-hch@lst.de>
Date:   Tue, 16 Apr 2019 20:23:46 +0200
From:   Christoph Hellwig <hch@....de>
To:     "David S. Miller" <davem@...emloft.net>
Cc:     Guenter Roeck <linux@...ck-us.net>, sparclinux@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: [PATCH 8/9] sparc/iommu: use __sbus_iommu_map_page to implement the map_sg path

This means we now handle offsets larger than PAGE_SIZE correctly, and the
size check that was so far only performed in the map_page path now covers
the map_sg path as well.  We lose the optimization that avoided flushing a
page twice when it appears in multiple consecutive SG list entries, but at
least for block I/O such entries don't happen anymore since segments are
properly merged in higher layers anyway.
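
As a rough, standalone illustration (userspace C, not the actual sparc32
helper; buf_to_paddr and the EX_* constants are made up for the example),
this is the kind of page accounting that lets a single map-page style
helper absorb offsets larger than PAGE_SIZE: the offset is folded into the
physical address, and the number of IOMMU pages is rounded up from the
remaining in-page offset plus the length.

#include <stdio.h>

#define EX_PAGE_SHIFT	12
#define EX_PAGE_SIZE	(1UL << EX_PAGE_SHIFT)
#define EX_PAGE_MASK	(~(EX_PAGE_SIZE - 1))

/* Hypothetical stand-in for page_to_phys(page) + offset. */
static unsigned long buf_to_paddr(unsigned long page_phys, unsigned long offset)
{
	return page_phys + offset;
}

int main(void)
{
	/* a buffer starting 5000 bytes into a page, 10000 bytes long */
	unsigned long len = 10000;
	unsigned long paddr = buf_to_paddr(0x10000000UL, 5000);
	unsigned long off = paddr & ~EX_PAGE_MASK;	/* offset inside the first page */
	unsigned long npages = (off + len + EX_PAGE_SIZE - 1) >> EX_PAGE_SHIFT;

	printf("base %#lx, in-page offset %lu, pages to map %lu\n",
	       paddr & EX_PAGE_MASK, off, npages);
	return 0;
}

With the per-entry helper doing this kind of accounting, the map_sg loop
only has to pass sg_page(sg), sg->offset and sg->length straight through,
which is what the hunk below does.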

Signed-off-by: Christoph Hellwig <hch@....de>
Reported-by: Guenter Roeck <linux@...ck-us.net>
---
 arch/sparc/mm/iommu.c | 31 ++++++++++---------------------
 1 file changed, 10 insertions(+), 21 deletions(-)

diff --git a/arch/sparc/mm/iommu.c b/arch/sparc/mm/iommu.c
index 37b5ce7657f6..8fbc08d14836 100644
--- a/arch/sparc/mm/iommu.c
+++ b/arch/sparc/mm/iommu.c
@@ -217,6 +217,11 @@ static dma_addr_t __sbus_iommu_map_page(struct device *dev, struct page *page,
 	if (!len || len > 256 * 1024)
 		return DMA_MAPPING_ERROR;
 
+	/*
+	 * We expect unmapped highmem pages to be not in the cache.
+	 * XXX Is this a good assumption?
+	 * XXX What if someone else unmaps it here and races us?
+	 */
 	if (per_page_flush && !PageHighMem(page)) {
 		unsigned long vaddr, p;
 
@@ -247,30 +252,14 @@ static int __sbus_iommu_map_sg(struct device *dev, struct scatterlist *sgl,
 		int nents, enum dma_data_direction dir, unsigned long attrs,
 		bool per_page_flush)
 {
-	unsigned long page, oldpage = 0;
 	struct scatterlist *sg;
-	int i, j, n;
+	int j;
 
 	for_each_sg(sgl, sg, nents, j) {
-		n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT;
-
-		/*
-		 * We expect unmapped highmem pages to be not in the cache.
-		 * XXX Is this a good assumption?
-		 * XXX What if someone else unmaps it here and races us?
-		 */
-		if (per_page_flush && !PageHighMem(sg_page(sg))) {
-			page = (unsigned long)page_address(sg_page(sg));
-			for (i = 0; i < n; i++) {
-				if (page != oldpage) {	/* Already flushed? */
-					flush_page_for_dma(page);
-					oldpage = page;
-				}
-				page += PAGE_SIZE;
-			}
-		}
-
-		sg->dma_address = iommu_get_one(dev, sg_phys(sg), n) + sg->offset;
+		sg->dma_address = __sbus_iommu_map_page(dev, sg_page(sg),
+				sg->offset, sg->length, per_page_flush);
+		if (sg->dma_address == DMA_MAPPING_ERROR)
+			return 0;
 		sg->dma_length = sg->length;
 	}
 
-- 
2.20.1
