Message-ID: <f59852aa-394c-3165-f7c0-fa6cdcea059f@arm.com>
Date: Wed, 6 Feb 2019 15:28:28 +0000
From: Robin Murphy <robin.murphy@....com>
To: Christoph Hellwig <hch@....de>
Cc: Joerg Roedel <joro@...tes.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will.deacon@....com>,
Tom Lendacky <thomas.lendacky@....com>,
iommu@...ts.linux-foundation.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 03/19] dma-iommu: don't use a scatterlist in
iommu_dma_alloc
On 01/02/2019 16:16, Christoph Hellwig wrote:
> On Fri, Feb 01, 2019 at 03:24:45PM +0000, Robin Murphy wrote:
>> On 14/01/2019 09:41, Christoph Hellwig wrote:
>>> Directly iterating over the pages makes the code a bit simpler and
>>> prepares for the following changes.
>>
>> It also defeats the whole purpose of __iommu_dma_alloc_pages(), so I'm not
>> really buying the simplification angle - you've *seen* that code, right? ;)
>
> How does it defeat the purpose of __iommu_dma_alloc_pages?
Because if iommu_map() only gets called at PAGE_SIZE granularity, then
the IOMMU PTEs will be created at PAGE_SIZE (or smaller) granularity, so
any effort to get higher-order allocations matching larger IOMMU block
sizes is wasted, and we may as well have just done this:
	for (i = 0; i < count; i++) {
		struct page *page = alloc_page(gfp);
		...
		iommu_map(..., page_to_phys(page), PAGE_SIZE, ...);
	}
Really, it's a shame we have to split huge pages for the CPU remap,
since in the common case the CPU MMU will have a matching block size,
but IIRC there was something in vmap() or thereabouts that explicitly
chokes on them.
Robin.