Message-ID: <1479305975-21670-1-git-send-email-jason.hui.liu@nxp.com>
Date:   Wed, 16 Nov 2016 22:19:35 +0800
From:   Jason Liu <jason.hui.liu@....com>
To:     <linux-arm-kernel@...ts.infradead.org>
CC:     <linux-kernel@...r.kernel.org>, <m.szyprowski@...sung.com>,
        <iamjoonsoo.kim@....com>, <gregkh@...uxfoundation.org>
Subject: [PATCH 1/1] drivers: dma-contiguous: Ensure CMA reserved region never crosses the low/high mem boundary

When the CMA reserved region is set up via the device-tree method, we
also need to ensure that the region does not cross the low/high memory
boundary. This patch applies the same fix as commit 16195dd
("mm: cma: Ensure that reservations never cross the low/high mem boundary").

Signed-off-by: Jason Liu <jason.hui.liu@....com>
Cc: Marek Szyprowski <m.szyprowski@...sung.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 drivers/base/dma-contiguous.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index e167a1e1..2bc093c 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -244,6 +244,7 @@ static int __init rmem_cma_setup(struct reserved_mem *rmem)
 {
 	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
 	phys_addr_t mask = align - 1;
+	phys_addr_t highmem_start;
 	unsigned long node = rmem->fdt_node;
 	struct cma *cma;
 	int err;
@@ -256,6 +257,32 @@ static int __init rmem_cma_setup(struct reserved_mem *rmem)
 		pr_err("Reserved memory: incorrect alignment of CMA region\n");
 		return -EINVAL;
 	}
+#ifdef CONFIG_X86
+	/*
+	 * high_memory isn't direct mapped memory so retrieving its physical
+	 * address isn't appropriate.  But it would be useful to check the
+	 * physical address of the highmem boundary so it's justifiable to get
+	 * the physical address from it.  On x86 there is a validation check for
+	 * this case, so the following workaround is needed to avoid it.
+	 */
+	highmem_start = __pa_nodebug(high_memory);
+#else
+	highmem_start = __pa(high_memory);
+#endif
+
+	/*
+	 * All pages in the reserved area must come from the same zone.
+	 * If the reserved region crosses the low/high memory boundary,
+	 * free it and retry the allocation from low memory only.
+	 */
+	if (rmem->base < highmem_start &&
+		(rmem->base + rmem->size) > highmem_start) {
+		memblock_free(rmem->base, rmem->size);
+		rmem->base = memblock_alloc_range(rmem->size, align, 0,
+						highmem_start, MEMBLOCK_NONE);
+		if (!rmem->base)
+			return -ENOMEM;
+	}
 
 	err = cma_init_reserved_mem(rmem->base, rmem->size, 0, &cma);
 	if (err) {
-- 
1.9.1
