Message-ID: <be6141f0-ae12-378c-9628-dee306f803cf@huawei.com>
Date: Tue, 20 Nov 2018 10:19:22 +0000
From: John Garry <john.garry@...wei.com>
To: Ganapatrao Kulkarni <gklkml16@...il.com>
CC: Joerg Roedel <joro@...tes.org>, Christoph Hellwig <hch@....de>,
"Marek Szyprowski" <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
<iommu@...ts.linux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Linuxarm <linuxarm@...wei.com>,
Will Deacon <will.deacon@....com>,
"Ganapatrao Kulkarni" <ganapatrao.kulkarni@...ium.com>
Subject: Re: [PATCH] iommu/dma: Use NUMA aware memory allocations in
__iommu_dma_alloc_pages()
On 20/11/2018 10:09, Ganapatrao Kulkarni wrote:
> Hi John,
>
> On Tue, Nov 20, 2018 at 3:35 PM John Garry <john.garry@...wei.com> wrote:
>>
>> On 08/11/2018 17:55, John Garry wrote:
>>> Change function __iommu_dma_alloc_pages() to allocate memory/pages
>>> for DMA from the respective device's NUMA node.
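>>>
>>> As a rough illustration of the idea (a hypothetical helper, not part
>>> of this patch), node-aware page allocation boils down to:
>>>
>>>     /*
>>>      * Sketch: steer a single-page allocation towards the device's
>>>      * NUMA node.  dev_to_node() returns NUMA_NO_NODE when no
>>>      * affinity is known, in which case alloc_pages_node() falls
>>>      * back to the current node.
>>>      */
>>>     static struct page *alloc_page_near_dev(struct device *dev, gfp_t gfp)
>>>     {
>>>             int nid = dev_to_node(dev);
>>>
>>>             return alloc_pages_node(nid, gfp, 0);
>>>     }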
>>>
>>
>> Ping.... a friendly reminder on this patch.
>>
>> Thanks
>>
>>> Originally-from: Ganapatrao Kulkarni <ganapatrao.kulkarni@...ium.com>
>>> Signed-off-by: John Garry <john.garry@...wei.com>
>>> ---
>>>
>>> This patch was originally posted by Ganapatrao in [1] *.
>>>
>>> However, after the initial review, it was never reposted (due to lack
>>> of cycles, I think). In addition, the functionality of its sibling
>>> patches was merged through other patches, as mentioned in [2]; that
>>> thread also refers to a discussion on device-local vs CPU-local
>>> allocations for the DMA pool, and which is better [3].
>>>
>>> However, as mentioned in [3], dma_alloc_coherent() uses the locality
>>> information from the device - as in direct DMA - so this patch just
>>> applies the same policy.
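>>>
>>> From a driver's point of view, the policy looks like this (a minimal
>>> usage sketch; the names and the 64K size are illustrative only):
>>>
>>>     /*
>>>      * With device-local allocations, the coherent buffer below
>>>      * should come from the NUMA node that "dev" sits on, not from
>>>      * the node of the CPU that happens to run this code.
>>>      */
>>>     dma_addr_t dma_handle;
>>>     void *buf = dma_alloc_coherent(dev, SZ_64K, &dma_handle, GFP_KERNEL);
>>>
>>>     if (buf)
>>>             dma_free_coherent(dev, SZ_64K, buf, dma_handle);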
>>>
>>> [1] https://lore.kernel.org/patchwork/patch/833004/
>>> [2] https://lkml.org/lkml/2018/8/22/391
>>> [3] https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1692998.html
>>>
>>> * Authorship on this updated patch may need to be fixed - I did not
>>> want to add Ganapatrao's SoB without permission.
>
> thanks for taking this up. please feel free to add my SoB.
OK, I will also make you the author and repost.
Thanks,
John
>>>
>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>> index d1b0475..ada00bc 100644
>>> --- a/drivers/iommu/dma-iommu.c
>>> +++ b/drivers/iommu/dma-iommu.c
>>> @@ -449,20 +449,17 @@ static void __iommu_dma_free_pages(struct page **pages, int count)
>>> kvfree(pages);
>>> }
>>>
>>> -static struct page **__iommu_dma_alloc_pages(unsigned int count,
>>> - unsigned long order_mask, gfp_t gfp)
>>> +static struct page **__iommu_dma_alloc_pages(struct device *dev,
>>> + unsigned int count, unsigned long order_mask, gfp_t gfp)
>>> {
>>> struct page **pages;
>>> - unsigned int i = 0, array_size = count * sizeof(*pages);
>>> + unsigned int i = 0, nid = dev_to_node(dev);
>>>
>>> order_mask &= (2U << MAX_ORDER) - 1;
>>> if (!order_mask)
>>> return NULL;
>>>
>>> - if (array_size <= PAGE_SIZE)
>>> - pages = kzalloc(array_size, GFP_KERNEL);
>>> - else
>>> - pages = vzalloc(array_size);
>>> + pages = kvzalloc_node(count * sizeof(*pages), GFP_KERNEL, nid);
>>> if (!pages)
>>> return NULL;
>>>
>>> @@ -483,8 +480,10 @@ static struct page **__iommu_dma_alloc_pages(unsigned int count,
>>> unsigned int order = __fls(order_mask);
>>>
>>> order_size = 1U << order;
>>> - page = alloc_pages((order_mask - order_size) ?
>>> - gfp | __GFP_NORETRY : gfp, order);
>>> + page = alloc_pages_node(nid,
>>> + (order_mask - order_size) ?
>>> + gfp | __GFP_NORETRY : gfp,
>>> + order);
>>> if (!page)
>>> continue;
>>> if (!order)
>>> @@ -569,7 +568,8 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
>>> alloc_sizes = min_size;
>>>
>>> count = PAGE_ALIGN(size) >> PAGE_SHIFT;
>>> - pages = __iommu_dma_alloc_pages(count, alloc_sizes >> PAGE_SHIFT, gfp);
>>> + pages = __iommu_dma_alloc_pages(dev, count, alloc_sizes >> PAGE_SHIFT,
>>> + gfp);
>>> if (!pages)
>>> return NULL;
>>>
>>>
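>>> As an aside, the switch to kvzalloc_node() also folds away the old
>>> kzalloc()/vzalloc() size split, since it falls back from kmalloc to
>>> vmalloc internally while still taking a node hint. Illustrative
>>> fragment only:
>>>
>>>     pages = kvzalloc_node(count * sizeof(*pages), GFP_KERNEL, nid);
>>>     if (!pages)
>>>             return NULL;
>>>     ...
>>>     kvfree(pages);    /* frees kmalloc- or vmalloc-backed memory */
>>>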
>>
>>
>
> thanks
> Ganapat
>
> .
>