Message-ID: <a3990c9921a44884b0adc448d1281b0a@hisilicon.com>
Date: Tue, 7 Dec 2021 05:37:50 +0000
From: "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>
To: Robin Murphy <robin.murphy@....com>,
Jay Chen <jkchen@...ux.alibaba.com>, "hch@....de" <hch@....de>,
"m.szyprowski@...sung.com" <m.szyprowski@...sung.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>
CC: "zhangliguang@...ux.alibaba.com" <zhangliguang@...ux.alibaba.com>
Subject: RE: [RFC PATCH] provide per numa cma with an initial default size
> -----Original Message-----
> From: Robin Murphy [mailto:robin.murphy@....com]
> Sent: Tuesday, December 7, 2021 4:01 AM
> To: Jay Chen <jkchen@...ux.alibaba.com>; hch@....de; m.szyprowski@...sung.com;
> linux-kernel@...r.kernel.org; iommu@...ts.linux-foundation.org; Song Bao Hua
> (Barry Song) <song.bao.hua@...ilicon.com>
> Cc: zhangliguang@...ux.alibaba.com
> Subject: Re: [RFC PATCH] provide per numa cma with an initial default size
>
> [ +Barry ]
>
> On 2021-11-30 07:45, Jay Chen wrote:
> > In our production environment, when we enable both CMA and
> > per-NUMA CMA but do not set a per-NUMA size on the kernel
> > command line, we find that performance drops by 20%.
> > Analysis shows that the per-NUMA size defaults to 0, so the
> > driver falls back to allocating from the global CMA area,
> > which hurts performance. We therefore think a default size
> > should be provided.
>
> Looking back at some of the review discussions, I think it may have been
> intentional that per-node areas are not allocated by default, since it's
> the kind of thing that really wants to be tuned to the particular system
> and workload, and as such it seemed reasonable to expect users to
> provide a value on the command line if they wanted the feature. That's
> certainly what the Kconfig text implies.
>
> Thanks,
> Robin.
>
> > Signed-off-by: Jay Chen <jkchen@...ux.alibaba.com>
> > ---
> > kernel/dma/contiguous.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
> > index 3d63d91cba5c..3bef8bf371d9 100644
> > --- a/kernel/dma/contiguous.c
> > +++ b/kernel/dma/contiguous.c
> > @@ -99,7 +99,7 @@ early_param("cma", early_cma);
> > #ifdef CONFIG_DMA_PERNUMA_CMA
> >
> > static struct cma *dma_contiguous_pernuma_area[MAX_NUMNODES];
> > -static phys_addr_t pernuma_size_bytes __initdata;
> > +static phys_addr_t pernuma_size_bytes __initdata = size_bytes;
I don't think the size used for the default CMA area can simply be
applied to the per-NUMA CMA areas.
We did have some discussion about the size when per-NUMA CMA was
added, and it was done via a Kconfig option. I think we decided
not to have any default size other than 0. A default of 0 is perfect:
it forces users to set a proper "cma_pernuma=" boot parameter.
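For illustration only (the 16M value below is made up, not a
recommendation), a user who wants per-node CMA areas would boot with
something like

    cma_pernuma=16M

which, if I remember the parsing code correctly, is turned into
pernuma_size_bytes by the early_cma_pernuma() handler via memparse(),
much like the existing "cma=" option.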
> >
> > static int __init early_cma_pernuma(char *p)
> > {
> >
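Just to make the reported 20% drop concrete: with a zero per-node size
no per-NUMA area is ever reserved, so allocations fall through to the
single default CMA area, which may well live on a remote node. A rough
C sketch of that fallback, paraphrased from memory rather than quoted
from kernel/dma/contiguous.c (the wrapper name
alloc_from_contiguous_sketch is mine; the helpers are the upstream ones
as far as I recall):

    struct page *alloc_from_contiguous_sketch(struct device *dev,
                                               size_t size, gfp_t gfp)
    {
            int nid = dev_to_node(dev);

            if (nid != NUMA_NO_NODE) {
                    /* With cma_pernuma= unset this pointer stays NULL,
                     * so the per-node path is skipped entirely. */
                    struct cma *cma = dma_contiguous_pernuma_area[nid];

                    if (cma) {
                            struct page *page =
                                    cma_alloc_aligned(cma, size, gfp);
                            if (page)
                                    return page;
                    }
            }

            /* Fall back to the one global CMA area, possibly on a
             * remote node -- the likely source of the slowdown. */
            return cma_alloc_aligned(dma_contiguous_default_area,
                                     size, gfp);
    }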
Thanks
Barry