Message-ID: <CAKTKpr4S_v+Fm9K0F8xZTuwKSArZapXr-4Xo1YoNn8=X70H83g@mail.gmail.com>
Date: Tue, 20 Nov 2018 15:39:30 +0530
From: Ganapatrao Kulkarni <gklkml16@...il.com>
To: John Garry <john.garry@...wei.com>
Cc: Joerg Roedel <joro@...tes.org>, Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
iommu@...ts.linux-foundation.org,
LKML <linux-kernel@...r.kernel.org>,
Linuxarm <linuxarm@...wei.com>,
Will Deacon <will.deacon@....com>,
Ganapatrao Kulkarni <ganapatrao.kulkarni@...ium.com>
Subject: Re: [PATCH] iommu/dma: Use NUMA aware memory allocations in __iommu_dma_alloc_pages()
Hi John,
On Tue, Nov 20, 2018 at 3:35 PM John Garry <john.garry@...wei.com> wrote:
>
> On 08/11/2018 17:55, John Garry wrote:
> > Change function __iommu_dma_alloc_pages() to allocate memory/pages
> > for DMA from the NUMA node of the respective device.
> >
>
> Ping.... a friendly reminder on this patch.
>
> Thanks
>
> > Originally-from: Ganapatrao Kulkarni <ganapatrao.kulkarni@...ium.com>
> > Signed-off-by: John Garry <john.garry@...wei.com>
> > ---
> >
> > This patch was originally posted by Ganapatrao in [1] *.
> >
> > However, after the initial review, it was never reposted (due to lack of
> > cycles, I think). In addition, the functionality of its sibling patches
> > was merged through other patches, as mentioned in [2]; that thread also
> > refers to a discussion on device-local allocations vs CPU-local
> > allocations for the DMA pool, and which is better [3].
> >
> > However, as mentioned in [3], dma_alloc_coherent() uses the locality
> > information from the device - as in direct DMA - so this patch just
> > applies the same policy.
> >
> > [1] https://lore.kernel.org/patchwork/patch/833004/
> > [2] https://lkml.org/lkml/2018/8/22/391
> > [3] https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1692998.html
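
Side note, not part of the patch: for anyone following the thread, the
device-local policy boils down to deriving the allocation node from the
device, with a fallback to the local memory node when the device has no
NUMA affinity. Roughly:

	/*
	 * Illustration only. dev_to_node() may return NUMA_NO_NODE for
	 * devices without NUMA affinity; alloc_pages_node() already falls
	 * back to the current memory node in that case, which is why the
	 * patch can pass dev_to_node(dev) through unchanged.
	 */
	int nid = dev_to_node(dev);

	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();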
> >
> > * Authorship on this updated patch may need to be fixed - I did not want
> > to add Ganapatrao's SOB without permission.
Thanks for taking this up. Please feel free to add my SoB.
> >
> > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > index d1b0475..ada00bc 100644
> > --- a/drivers/iommu/dma-iommu.c
> > +++ b/drivers/iommu/dma-iommu.c
> > @@ -449,20 +449,17 @@ static void __iommu_dma_free_pages(struct page **pages, int count)
> > kvfree(pages);
> > }
> >
> > -static struct page **__iommu_dma_alloc_pages(unsigned int count,
> > - unsigned long order_mask, gfp_t gfp)
> > +static struct page **__iommu_dma_alloc_pages(struct device *dev,
> > + unsigned int count, unsigned long order_mask, gfp_t gfp)
> > {
> > struct page **pages;
> > - unsigned int i = 0, array_size = count * sizeof(*pages);
> > + unsigned int i = 0, nid = dev_to_node(dev);
> >
> > order_mask &= (2U << MAX_ORDER) - 1;
> > if (!order_mask)
> > return NULL;
> >
> > - if (array_size <= PAGE_SIZE)
> > - pages = kzalloc(array_size, GFP_KERNEL);
> > - else
> > - pages = vzalloc(array_size);
> > + pages = kvzalloc_node(count * sizeof(*pages), GFP_KERNEL, nid);
> > if (!pages)
> > return NULL;
> >
> > @@ -483,8 +480,10 @@ static struct page **__iommu_dma_alloc_pages(unsigned int count,
> > unsigned int order = __fls(order_mask);
> >
> > order_size = 1U << order;
> > - page = alloc_pages((order_mask - order_size) ?
> > - gfp | __GFP_NORETRY : gfp, order);
> > + page = alloc_pages_node(nid,
> > + (order_mask - order_size) ?
> > + gfp | __GFP_NORETRY : gfp,
> > + order);
> > if (!page)
> > continue;
> > if (!order)
> > @@ -569,7 +568,8 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
> > alloc_sizes = min_size;
> >
> > count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > - pages = __iommu_dma_alloc_pages(count, alloc_sizes >> PAGE_SHIFT, gfp);
> > + pages = __iommu_dma_alloc_pages(dev, count, alloc_sizes >> PAGE_SHIFT,
> > + gfp);
> > if (!pages)
> > return NULL;
> >
> >
>
>
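For completeness, a minimal driver-side sketch of what the change buys us
(hypothetical code, not from the patch; 'dev' is assumed to sit on a
non-zero NUMA node): with the above, the pages backing a coherent
allocation that goes through the IOMMU path should come from the device's
node rather than from whichever node the allocating CPU happens to run on.

	void *buf;
	dma_addr_t dma_handle;

	/*
	 * Sketch only: nothing changes in the call for drivers, only the
	 * expected placement of the backing pages.
	 */
	buf = dma_alloc_coherent(dev, SZ_64K, &dma_handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
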
thanks
Ganapat