Message-ID: <20190211160049.GB27745@lst.de>
Date: Mon, 11 Feb 2019 17:00:49 +0100
From: Christoph Hellwig <hch@....de>
To: Robin Murphy <robin.murphy@....com>
Cc: Christoph Hellwig <hch@....de>, Joerg Roedel <joro@...tes.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will.deacon@....com>,
Tom Lendacky <thomas.lendacky@....com>,
iommu@...ts.linux-foundation.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 03/19] dma-iommu: don't use a scatterlist in
iommu_dma_alloc
On Wed, Feb 06, 2019 at 03:28:28PM +0000, Robin Murphy wrote:
> Because if iommu_map() only gets called at PAGE_SIZE granularity, then the
> IOMMU PTEs will be created at PAGE_SIZE (or smaller) granularity, so any
> effort to get higher-order allocations matching larger IOMMU block sizes is
> wasted, and we may as well have just done this:
>
> 	for (i = 0; i < count; i++) {
> 		struct page *page = alloc_page(gfp);
> 		...
> 		iommu_map(..., page_to_phys(page), PAGE_SIZE, ...);
> 	}
True. I've dropped this patch.
> Really, it's a shame we have to split huge pages for the CPU remap, since
> in the common case the CPU MMU will have a matching block size, but IIRC
> there was something in vmap() or thereabouts that explicitly chokes on
> them.
That just needs a volunteer to fix the implementation, as there is no
fundamental reason not to remap large pages.