Message-Id: <20200710141016.15495-1-nsaenzjulienne@suse.de>
Date: Fri, 10 Jul 2020 16:10:15 +0200
From: Nicolas Saenz Julienne <nsaenzjulienne@...e.de>
To: hch@....de, Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
David Rientjes <rientjes@...gle.com>
Cc: linux-rpi-kernel@...ts.infradead.org, jeremy.linton@....com,
Nicolas Saenz Julienne <nsaenzjulienne@...e.de>,
iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: [PATCH] dma-pool: Only allocate from CMA when in same memory zone

There is no guarantee of CMA's placement, so allocating a zone-specific
atomic pool from CMA might return memory from a completely different
memory zone. To get around this, double-check CMA's placement before
allocating from it.

Fixes: c84dc6e68a1d ("dma-pool: add additional coherent pools to map to gfp mask")
Reported-by: Jeremy Linton <jeremy.linton@....com>
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@...e.de>
---
This is a code-intensive alternative to "dma-pool: Do not allocate pool
memory from CMA"[1].

[1] https://lkml.org/lkml/2020/7/8/1108
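
As a worked example of the containment test cma_in_zone() performs
below, here's a minimal stand-alone sketch. The numbers are made up for
illustration (a 30-bit ZONE_DMA, as on the Raspberry Pi 4, and a 64 MiB
CMA area based at 2 GiB), and cma_fits() is a hypothetical helper, not
kernel code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical setup: 30-bit ZONE_DMA (as on the Raspberry Pi 4) and a
 * 64 MiB CMA area based at 2 GiB, i.e. above ZONE_DMA but still within
 * ZONE_DMA32.
 */
#define ZONE_DMA_BITS	30
#define DMA_BIT_MASK(n)	(((uint64_t)1 << (n)) - 1)

static const uint64_t cma_base = 2ULL << 30;	/* 2 GiB */
static const uint64_t cma_size = 64ULL << 20;	/* 64 MiB */

/* Mirrors the "end <= zone_*_end" containment tests in cma_in_zone():
 * true iff the whole CMA area sits below the zone's last address.
 */
static bool cma_fits(uint64_t zone_end)
{
	return cma_base + cma_size - 1 <= zone_end;
}

int main(void)
{
	printf("fits ZONE_DMA:   %d\n", cma_fits(DMA_BIT_MASK(ZONE_DMA_BITS)));
	printf("fits ZONE_DMA32: %d\n", cma_fits(DMA_BIT_MASK(32)));
	return 0;
}

With that placement the area overflows ZONE_DMA but fits ZONE_DMA32, so
a GFP_DMA pool has to fall back on the regular page allocator while a
GFP_DMA32 pool can still be served from CMA.
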
 kernel/dma/pool.c | 36 +++++++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 8cfa01243ed2..ccf3eeb77e00 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -3,6 +3,7 @@
  * Copyright (C) 2012 ARM Ltd.
  * Copyright (C) 2020 Google LLC
  */
+#include <linux/cma.h>
 #include <linux/debugfs.h>
 #include <linux/dma-direct.h>
 #include <linux/dma-noncoherent.h>
@@ -56,6 +57,39 @@ static void dma_atomic_pool_size_add(gfp_t gfp, size_t size)
 		pool_size_kernel += size;
 }
 
+static bool cma_in_zone(gfp_t gfp)
+{
+	u64 zone_dma_end, zone_dma32_end;
+	phys_addr_t base, end;
+	unsigned long size;
+	struct cma *cma;
+
+	cma = dev_get_cma_area(NULL);
+	if (!cma)
+		return false;
+
+	size = cma_get_size(cma);
+	if (!size)
+		return false;
+	base = cma_get_base(cma) - memblock_start_of_DRAM();
+	end = base + size - 1;
+
+	zone_dma_end = IS_ENABLED(CONFIG_ZONE_DMA) ? DMA_BIT_MASK(zone_dma_bits) : 0;
+	zone_dma32_end = IS_ENABLED(CONFIG_ZONE_DMA32) ? DMA_BIT_MASK(32) : 0;
+
+	/* CMA can't cross zone boundaries, see cma_activate_area() */
+	if (IS_ENABLED(CONFIG_ZONE_DMA) && gfp & GFP_DMA &&
+	    end <= zone_dma_end)
+		return true;
+	else if (IS_ENABLED(CONFIG_ZONE_DMA32) && gfp & GFP_DMA32 &&
+		 base > zone_dma_end && end <= zone_dma32_end)
+		return true;
+	else if (base > zone_dma32_end)
+		return true;
+
+	return false;
+}
+
 static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 			      gfp_t gfp)
 {
@@ -70,7 +104,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 
 	do {
 		pool_size = 1 << (PAGE_SHIFT + order);
-		if (dev_get_cma_area(NULL))
+		if (cma_in_zone(gfp))
 			page = dma_alloc_from_contiguous(NULL, 1 << order,
 							 order, false);
 		else
-- 
2.27.0