Message-Id: <1512114786-5085-4-git-send-email-iamjoonsoo.kim@lge.com>
Date:   Fri,  1 Dec 2017 16:53:06 +0900
From:   js1304@...il.com
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Rik van Riel <riel@...hat.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Laura Abbott <lauraa@...eaurora.org>,
        Minchan Kim <minchan@...nel.org>,
        Marek Szyprowski <m.szyprowski@...sung.com>,
        Michal Nazarewicz <mina86@...a86.com>,
        "Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Russell King <linux@...linux.org.uk>,
        Will Deacon <will.deacon@....com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, kernel-team@....com,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Tony Lindgren <tony@...mide.com>
Subject: [PATCH v2 3/3] ARM: CMA: avoid double mapping to the CMA area if CONFIG_HIGHMEM = y

From: Joonsoo Kim <iamjoonsoo.kim@....com>

The CMA area is now managed by a separate zone, ZONE_MOVABLE, to fix
many MM-related problems. In this implementation, if CONFIG_HIGHMEM=y,
ZONE_MOVABLE is considered highmem, so the memory of the CMA area is
also considered highmem. That means its pages are treated as pages
without a direct mapping. However, the CMA area could be in lowmem,
and its memory could in fact have a direct mapping.

On ARM, when establishing a new mapping for DMA, the direct mapping
should be cleared first, since two mappings with different cache
policies could cause unknown problems. With this change, PageHighMem()
returns true for CMA memory located in lowmem, so the DMA mapping code
can no longer tell correctly whether it needs to clear the direct
mapping. To handle this situation, this patch always clears the direct
mapping for such CMA memory.
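
For reference, here is a minimal sketch of the problem described above
(illustrative only, not part of this patch; the helper name is
hypothetical, while PageHighMem() and ZONE_MOVABLE are the standard
kernel symbols):

#include <linux/mm.h>

/*
 * When ZONE_MOVABLE counts as a highmem zone (CONFIG_HIGHMEM=y, as
 * described above), PageHighMem() reflects only the page's zone, not
 * whether a kernel direct mapping actually exists for that page.
 */
static bool cma_page_has_direct_mapping(struct page *page)
{
	/*
	 * Unreliable for CMA pages: a lowmem CMA page now sits on
	 * ZONE_MOVABLE, so PageHighMem() returns true even though the
	 * page still has a direct mapping that must be cleared before
	 * it is re-mapped for DMA with different cache attributes.
	 */
	return !PageHighMem(page);
}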

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
---
 arch/arm/mm/dma-mapping.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index ada8eb2..8c398fe 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -466,6 +466,12 @@ void __init dma_contiguous_early_fixup(phys_addr_t base, unsigned long size)
 void __init dma_contiguous_remap(void)
 {
 	int i;
+
+	if (!dma_mmu_remap_num)
+		return;
+
+	/* Flush the whole cache, since the CMA area is expected to be large */
+	flush_cache_all();
 	for (i = 0; i < dma_mmu_remap_num; i++) {
 		phys_addr_t start = dma_mmu_remap[i].base;
 		phys_addr_t end = start + dma_mmu_remap[i].size;
@@ -498,7 +504,15 @@ void __init dma_contiguous_remap(void)
 		flush_tlb_kernel_range(__phys_to_virt(start),
 				       __phys_to_virt(end));
 
-		iotable_init(&map, 1);
+		/*
+		 * All the memory in the CMA region will be on ZONE_MOVABLE.
+		 * If that zone is considered highmem, the memory in the CMA
+		 * region is also considered highmem even if its physical
+		 * address belongs to lowmem. In this case, re-mapping isn't
+		 * required.
+		 */
+		if (!is_highmem_idx(ZONE_MOVABLE))
+			iotable_init(&map, 1);
 	}
 }
 
-- 
2.7.4
