Message-ID: <20200821113355.6140-3-song.bao.hua@hisilicon.com>
Date: Fri, 21 Aug 2020 23:33:54 +1200
From: Barry Song <song.bao.hua@...ilicon.com>
To: <hch@....de>, <m.szyprowski@...sung.com>, <robin.murphy@....com>,
<will@...nel.org>, <ganapatrao.kulkarni@...ium.com>,
<catalin.marinas@....com>, <akpm@...ux-foundation.org>
CC: <iommu@...ts.linux-foundation.org>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <prime.zeng@...ilicon.com>,
<huangdaode@...wei.com>, <linuxarm@...wei.com>,
Barry Song <song.bao.hua@...ilicon.com>,
Nicolas Saenz Julienne <nsaenzjulienne@...e.de>,
Steve Capper <steve.capper@....com>,
Mike Rapoport <rppt@...ux.ibm.com>
Subject: [PATCH v7 2/3] arm64: mm: reserve per-numa CMA to localize coherent dma buffers

Right now, the SMMU uses dma_alloc_coherent() to get memory for its queues
and tables. Typically, on an ARM64 server, there is a single default CMA
located on node0, which could be far away from node2, node3, etc.

With this patch, the SMMU will get memory from the local NUMA node for its
command queues and page tables, which significantly reduces dma_unmap
latency. Meanwhile, when iommu.passthrough is on, device drivers which call
dma_alloc_coherent() will also get local memory and avoid cross-node
traffic.
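
For illustration only (not part of this patch), a minimal sketch of the
kind of driver-side allocation that benefits: with the per-NUMA areas
reserved, the buffer below is expected to be served from CMA on the node
the device is attached to rather than from the single default area on
node0. The size macro and function name here are made up for the example.

#include <linux/dma-mapping.h>
#include <linux/sizes.h>

/* Placeholder size for the example; real drivers size this themselves. */
#define MY_QUEUE_SIZE	SZ_64K

static void *alloc_queue(struct device *dev, dma_addr_t *dma)
{
	/*
	 * Ordinary coherent allocation; with per-NUMA CMA in place this
	 * should come from memory local to dev's NUMA node.
	 */
	return dma_alloc_coherent(dev, MY_QUEUE_SIZE, dma, GFP_KERNEL);
}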
Acked-by: Will Deacon <will@...nel.org>
Cc: Christoph Hellwig <hch@....de>
Cc: Marek Szyprowski <m.szyprowski@...sung.com>
Cc: Robin Murphy <robin.murphy@....com>
Cc: Ganapatrao Kulkarni <ganapatrao.kulkarni@...ium.com>
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Nicolas Saenz Julienne <nsaenzjulienne@...e.de>
Cc: Steve Capper <steve.capper@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Mike Rapoport <rppt@...ux.ibm.com>
Signed-off-by: Barry Song <song.bao.hua@...ilicon.com>
---
-v7: add Will's acked-by
arch/arm64/mm/init.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 481d22c32a2e..f1c75957ff3c 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -429,6 +429,8 @@ void __init bootmem_init(void)
 	arm64_hugetlb_cma_reserve();
 #endif
 
+	dma_pernuma_cma_reserve();
+
 	/*
 	 * sparse_init() tries to allocate memory from memblock, so must be
 	 * done after the fixed reservations
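
For readers who do not have patch 1/3 of this series at hand, below is a
rough, illustrative-only sketch of how a per-node reservation helper in the
spirit of dma_pernuma_cma_reserve() could be built on top of
cma_declare_contiguous_nid(). The names, size handling and error reporting
are placeholders, not the actual implementation from the series.

#include <linux/cma.h>
#include <linux/init.h>
#include <linux/nodemask.h>
#include <linux/printk.h>

/* Illustrative per-node CMA handles; sizing policy omitted. */
static struct cma *pernuma_cma[MAX_NUMNODES];

static void __init pernuma_cma_reserve_sketch(phys_addr_t size)
{
	int nid;

	for_each_online_node(nid) {
		int ret;

		/* base/limit of 0 let the allocator pick a range on @nid */
		ret = cma_declare_contiguous_nid(0, size, 0, 0, 0, false,
						 "pernuma",
						 &pernuma_cma[nid], nid);
		if (ret)
			pr_warn("pernuma cma: reservation failed on node %d\n",
				nid);
	}
}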
--
2.27.0