Message-ID: <0a6b3f53-79e5-af83-be39-f04c9acd8384@arm.com>
Date:   Tue, 23 Apr 2019 11:01:44 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     Christoph Hellwig <hch@....de>
Cc:     Joerg Roedel <joro@...tes.org>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will.deacon@....com>,
        Tom Lendacky <thomas.lendacky@....com>,
        iommu@...ts.linux-foundation.org,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 12/21] dma-iommu: factor atomic pool allocations into
 helpers

On 19/04/2019 09:23, Christoph Hellwig wrote:
> On Thu, Apr 18, 2019 at 07:15:00PM +0100, Robin Murphy wrote:
>> Still, I've worked in the vm_map_pages() stuff pending in MM and given it
>> the same treatment to finish the picture. Both x86_64_defconfig and
>> i386_defconfig do indeed compile and link fine as I expected, so I really
>> would like to understand the concern around #ifdefs better.
> 
> This looks generally fine to me.  One thing I'd like to do is to
> generally make use of the fact that __iommu_dma_get_pages returns NULL
> for the force contiguous case as that cleans up a few things.  Also
> for the !DMA_REMAP case we need to try the page allocator when
> dma_alloc_from_contiguous does not return a page.  What do you think
> of the following incremental diff?  If that is fine with you I can
> fold that in and add back in the remaining patches from my series
> not obsoleted by your patches and resend.

Wouldn't this suffice? Since we also use alloc_pages() in the coherent 
atomic case, the free path should already be able to deal with it.
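
For reference, the reason I think the existing free path copes is that
dma_release_from_contiguous() returns false for pages that did not come
from the CMA area, so a page obtained via plain alloc_pages() simply
falls through to __free_pages(). Roughly this shape (a sketch of the
idea, not the exact code in the series):

	/* count = size >> PAGE_SHIFT; returns false if page isn't CMA-backed */
	if (page && !dma_release_from_contiguous(dev, page, count))
		__free_pages(page, get_order(size));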

Let me take a proper look at v3 and see how it all looks in context.

Robin.

----->8-----
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 1bc8d1de1a1d..0a02ddc27862 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -944,6 +944,8 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
  		   (attrs & DMA_ATTR_FORCE_CONTIGUOUS)) {
  		page = dma_alloc_from_contiguous(dev, count, page_order,
  						 gfp & __GFP_NOWARN);
+		if (!page)
+			page = alloc_pages(gfp, page_order);
  	} else {
  		return iommu_dma_alloc_remap(dev, size, handle, gfp, attrs);
  	}
