Message-ID: <B926444035E5E2439431908E3842AFD25A4F81@DGGEMI525-MBS.china.huawei.com>
Date: Thu, 23 Jul 2020 12:08:27 +0000
From: "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>
To: Christoph Hellwig <hch@....de>
CC: "m.szyprowski@...sung.com" <m.szyprowski@...sung.com>,
"robin.murphy@....com" <robin.murphy@....com>,
"will@...nel.org" <will@...nel.org>,
"ganapatrao.kulkarni@...ium.com" <ganapatrao.kulkarni@...ium.com>,
"catalin.marinas@....com" <catalin.marinas@....com>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
Linuxarm <linuxarm@...wei.com>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Jonathan Cameron <jonathan.cameron@...wei.com>,
Nicolas Saenz Julienne <nsaenzjulienne@...e.de>,
Steve Capper <steve.capper@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
"Zengtao (B)" <prime.zeng@...ilicon.com>,
huangdaode <huangdaode@...wei.com>
Subject: RE: [PATCH v3 1/2] dma-direct: provide the ability to reserve
per-numa CMA
> -----Original Message-----
> From: Christoph Hellwig [mailto:hch@....de]
> Sent: Friday, July 24, 2020 12:01 AM
> To: Song Bao Hua (Barry Song) <song.bao.hua@...ilicon.com>
> Cc: Christoph Hellwig <hch@....de>; m.szyprowski@...sung.com;
> robin.murphy@....com; will@...nel.org; ganapatrao.kulkarni@...ium.com;
> catalin.marinas@....com; iommu@...ts.linux-foundation.org; Linuxarm
> <linuxarm@...wei.com>; linux-arm-kernel@...ts.infradead.org;
> linux-kernel@...r.kernel.org; Jonathan Cameron
> <jonathan.cameron@...wei.com>; Nicolas Saenz Julienne
> <nsaenzjulienne@...e.de>; Steve Capper <steve.capper@....com>; Andrew
> Morton <akpm@...ux-foundation.org>; Mike Rapoport <rppt@...ux.ibm.com>;
> Zengtao (B) <prime.zeng@...ilicon.com>; huangdaode
> <huangdaode@...wei.com>
> Subject: Re: [PATCH v3 1/2] dma-direct: provide the ability to reserve
> per-numa CMA
>
> On Wed, Jul 22, 2020 at 09:41:50PM +0000, Song Bao Hua (Barry Song)
> wrote:
> > I got a kernel robot warning which said dev should be checked before
> > being accessed when I did a similar change in v1. Probably it was an
> > invalid warning if dev should never be null.
>
> That usually shows up if a function is inconsistent about sometimes checking it
> and sometimes not.
>
> > Yes, it looks much better.
>
> Below is a prep patch to rebase on top of:
Thanks for letting me know.
Will rebase on top of your patch.
>
> ---
> From b81a5e1da65fce9750f0a8b66dbb6f842cbfdd4d Mon Sep 17 00:00:00 2001
> From: Christoph Hellwig <hch@....de>
> Date: Wed, 22 Jul 2020 16:33:43 +0200
> Subject: dma-contiguous: cleanup dma_alloc_contiguous
>
> Split out a cma_alloc_aligned helper to deal with the "interesting"
> calling conventions for cma_alloc, which then allows the main function
> to be written in a straightforward way. This also takes advantage of the
> fact that NULL dev arguments have been gone from the DMA API for a while.
>
> Signed-off-by: Christoph Hellwig <hch@....de>
> ---
> kernel/dma/contiguous.c | 31 ++++++++++++++-----------------
> 1 file changed, 14 insertions(+), 17 deletions(-)
>
> diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
> index 15bc5026c485f2..cff7e60968b9e1 100644
> --- a/kernel/dma/contiguous.c
> +++ b/kernel/dma/contiguous.c
> @@ -215,6 +215,13 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
>  	return cma_release(dev_get_cma_area(dev), pages, count);
>  }
>
> +static struct page *cma_alloc_aligned(struct cma *cma, size_t size, gfp_t gfp)
> +{
> +	unsigned int align = min(get_order(size), CONFIG_CMA_ALIGNMENT);
> +
> +	return cma_alloc(cma, size >> PAGE_SHIFT, align, gfp & __GFP_NOWARN);
> +}
> +
> /**
> * dma_alloc_contiguous() - allocate contiguous pages
> * @dev: Pointer to device for which the allocation is performed.
> @@ -231,24 +238,14 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
> */
> struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
> {
> -	size_t count = size >> PAGE_SHIFT;
> -	struct page *page = NULL;
> -	struct cma *cma = NULL;
> -
> -	if (dev && dev->cma_area)
> -		cma = dev->cma_area;
> -	else if (count > 1)
> -		cma = dma_contiguous_default_area;
> -
>  	/* CMA can be used only in the context which permits sleeping */
> -	if (cma && gfpflags_allow_blocking(gfp)) {
> -		size_t align = get_order(size);
> -		size_t cma_align = min_t(size_t, align, CONFIG_CMA_ALIGNMENT);
> -
> -		page = cma_alloc(cma, count, cma_align, gfp & __GFP_NOWARN);
> -	}
> -
> -	return page;
> +	if (!gfpflags_allow_blocking(gfp))
> +		return NULL;
> +	if (dev->cma_area)
> +		return cma_alloc_aligned(dev->cma_area, size, gfp);
> +	if (size <= PAGE_SIZE || !dma_contiguous_default_area)
> +		return NULL;
> +	return cma_alloc_aligned(dma_contiguous_default_area, size, gfp);
>  }
>
> /**
> --
> 2.27.0
Thanks
Barry