Message-ID: <1542721320-233109-1-git-send-email-john.garry@huawei.com>
Date: Tue, 20 Nov 2018 21:42:00 +0800
From: John Garry <john.garry@...wei.com>
To: <joro@...tes.org>
CC: <robin.murphy@....com>, <will.deacon@....com>,
<linux-kernel@...r.kernel.org>, <iommu@...ts.linux-foundation.org>,
<ganapatrao.kulkarni@...ium.com>, <hch@....de>,
<m.szyprowski@...sung.com>, John Garry <john.garry@...wei.com>
Subject: [PATCH v2] iommu/dma: Use NUMA aware memory allocations in __iommu_dma_alloc_pages()
From: Ganapatrao Kulkarni <ganapatrao.kulkarni@...ium.com>
Change __iommu_dma_alloc_pages() to allocate memory/pages for DMA
from the respective device's NUMA node.
Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@...ium.com>
[JPG: Modified to use kvzalloc() and fixed indentation]
Signed-off-by: John Garry <john.garry@...wei.com>
---
Differences v1->v2:
- Add Ganapatrao's tag and change author
This patch was originally posted by Ganapatrao in [1].
However, after initial review, it was never reposted (due to lack of
cycles, I think). In addition, the functionality of its sibling patches
was merged via other patches, as mentioned in [2]; that thread also
refers to a discussion on device-local vs CPU-local allocations for the
DMA pool, and which is better [3].
However, as mentioned in [3], dma_alloc_coherent() uses the locality
information from the device - as in direct DMA - so this patch simply
applies the same policy.
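
For illustration only (not part of the patch): a minimal sketch of that
policy. The helper name numa_aware_alloc() is hypothetical;
dev_to_node() and alloc_pages_node() are the real APIs the patch uses.

#include <linux/device.h>
#include <linux/gfp.h>

/*
 * Sketch: allocate pages on the NUMA node the device is attached to.
 * dev_to_node() returns NUMA_NO_NODE when the device has no node
 * assigned, in which case alloc_pages_node() falls back to the
 * current node.
 */
static struct page *numa_aware_alloc(struct device *dev, gfp_t gfp,
				     unsigned int order)
{
	int nid = dev_to_node(dev);

	return alloc_pages_node(nid, gfp, order);
}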
[1] https://lore.kernel.org/patchwork/patch/833004/
[2] https://lkml.org/lkml/2018/8/22/391
[3] https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1692998.html
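
A side note on the v1->v2 change: kvzalloc_node() subsumes the old
open-coded kzalloc()/vzalloc() size check while honouring the node
hint where possible. A before/after sketch (the "before" lines are the
code this patch removes):

	/* Before: node-blind, open-coded slab/vmalloc choice */
	if (array_size <= PAGE_SIZE)
		pages = kzalloc(array_size, GFP_KERNEL);
	else
		pages = vzalloc(array_size);

	/*
	 * After: kvzalloc_node() makes the same kmalloc-then-vmalloc
	 * decision internally, preferring memory from the given NUMA
	 * node when it can.
	 */
	pages = kvzalloc_node(count * sizeof(*pages), GFP_KERNEL, nid);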
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index d1b0475..ada00bc 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -449,20 +449,17 @@ static void __iommu_dma_free_pages(struct page **pages, int count)
kvfree(pages);
}
-static struct page **__iommu_dma_alloc_pages(unsigned int count,
- unsigned long order_mask, gfp_t gfp)
+static struct page **__iommu_dma_alloc_pages(struct device *dev,
+ unsigned int count, unsigned long order_mask, gfp_t gfp)
{
struct page **pages;
- unsigned int i = 0, array_size = count * sizeof(*pages);
+ unsigned int i = 0, nid = dev_to_node(dev);
order_mask &= (2U << MAX_ORDER) - 1;
if (!order_mask)
return NULL;
- if (array_size <= PAGE_SIZE)
- pages = kzalloc(array_size, GFP_KERNEL);
- else
- pages = vzalloc(array_size);
+ pages = kvzalloc_node(count * sizeof(*pages), GFP_KERNEL, nid);
if (!pages)
return NULL;
@@ -483,8 +480,10 @@ static struct page **__iommu_dma_alloc_pages(unsigned int count,
unsigned int order = __fls(order_mask);
order_size = 1U << order;
- page = alloc_pages((order_mask - order_size) ?
- gfp | __GFP_NORETRY : gfp, order);
+ page = alloc_pages_node(nid,
+ (order_mask - order_size) ?
+ gfp | __GFP_NORETRY : gfp,
+ order);
if (!page)
continue;
if (!order)
@@ -569,7 +568,8 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
alloc_sizes = min_size;
count = PAGE_ALIGN(size) >> PAGE_SHIFT;
- pages = __iommu_dma_alloc_pages(count, alloc_sizes >> PAGE_SHIFT, gfp);
+ pages = __iommu_dma_alloc_pages(dev, count, alloc_sizes >> PAGE_SHIFT,
+ gfp);
if (!pages)
return NULL;
--
1.9.1