Date:   Wed, 3 Jun 2020 14:42:31 +1200
From:   Barry Song <song.bao.hua@...ilicon.com>
To:     <hch@....de>, <m.szyprowski@...sung.com>, <robin.murphy@....com>,
        <catalin.marinas@....com>
CC:     <iommu@...ts.linux-foundation.org>,
        <linux-arm-kernel@...ts.infradead.org>,
        <linux-kernel@...r.kernel.org>, <linuxarm@...wei.com>,
        <Jonathan.Cameron@...wei.com>, <john.garry@...wei.com>,
        <prime.zeng@...ilicon.com>,
        Barry Song <song.bao.hua@...ilicon.com>,
        Will Deacon <will@...nel.org>
Subject: [PATCH 3/3] arm64: mm: reserve per-numa CMA after numa_init

Right now, the SMMU driver uses dma_alloc_coherent() to allocate memory
for its command queues and page tables. Typically, on an ARM64 server,
there is a single default CMA area located on node 0, which can be far
away from node 2, node 3, etc. With this patch, the SMMU gets memory
from the local NUMA node for its command queues and page tables, which
significantly reduces dma_unmap latency. The reservation is done after
arm64_numa_init() so that the NUMA topology is already known when the
per-node CMA areas are set up. Meanwhile, when iommu.passthrough is on,
device drivers that call dma_alloc_coherent() also get node-local memory
and avoid cross-node traffic.
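
As a rough illustration (not part of this patch, and the helper name
alloc_cmd_queue is hypothetical), a driver allocating a DMA buffer the
usual way needs no change to benefit; once dma_pernuma_cma_reserve()
has run, the coherent allocation below is expected to be backed by CMA
memory on the NUMA node of "dev" rather than the default area on node 0:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/sizes.h>

/* Example queue size, chosen only for illustration. */
#define CMDQ_SIZE	SZ_64K

static void *alloc_cmd_queue(struct device *dev, dma_addr_t *dma)
{
	/*
	 * Ordinary coherent allocation; with per-NUMA CMA reserved,
	 * the returned buffer should come from dev's local node.
	 */
	return dma_alloc_coherent(dev, CMDQ_SIZE, dma, GFP_KERNEL);
}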

Cc: Will Deacon <will@...nel.org>
Cc: Robin Murphy <robin.murphy@....com>
Signed-off-by: Barry Song <song.bao.hua@...ilicon.com>
---
 arch/arm64/mm/init.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 8f0e70ebb49d..204a534982b2 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -474,6 +474,8 @@ void __init bootmem_init(void)
 
 	arm64_numa_init();
 
+	dma_pernuma_cma_reserve();
+
 #ifdef CONFIG_ARM64_4K_PAGES
 	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
 #endif
-- 
2.23.0
