Message-ID: <e47f4bfd-3af7-f682-23a1-51800f992d35@oracle.com>
Date: Fri, 21 Aug 2020 10:52:50 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Barry Song <song.bao.hua@...ilicon.com>, hch@....de,
m.szyprowski@...sung.com, robin.murphy@....com, will@...nel.org,
ganapatrao.kulkarni@...ium.com, catalin.marinas@....com,
akpm@...ux-foundation.org
Cc: iommu@...ts.linux-foundation.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
prime.zeng@...ilicon.com, huangdaode@...wei.com,
linuxarm@...wei.com
Subject: Re: [PATCH v7 0/3] make dma_alloc_coherent NUMA-aware by per-NUMA CMA
Hi Barry,

Sorry for jumping in so late.

On 8/21/20 4:33 AM, Barry Song wrote:
>
> With per-NUMA CMA, the SMMU will get memory from the local NUMA node for its
> command queues and page tables. That means dma_unmap latency will shrink
> considerably.
Since per-node CMA areas for hugetlb were introduced, I have been thinking
about the limited number of CMA areas. In most configurations, I believe
it is limited to 7. And, IIRC, it is not something that can be changed at
runtime; you need to reconfigure and rebuild the kernel to increase the
number. In contrast, some configs have NODES_SHIFT set to 10. I wasn't too
worried because of the limited hugetlb use case. However, this series adds
another user of per-node CMA areas.

With more users, should we try to sync up the number of CMA areas with the
number of nodes? Or, perhaps I am worrying about nothing?
--
Mike Kravetz