Message-ID: <CAJuCfpEujbaSrk5+mR=+vWqwSu-t52fVmbPf5msnpduSB6AT2Q@mail.gmail.com>
Date: Fri, 21 Mar 2025 09:14:24 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Conor Dooley <conor@...nel.org>
Cc: akpm@...ux-foundation.org, willy@...radead.org, david@...hat.com,
vbabka@...e.cz, lorenzo.stoakes@...cle.com, liam.howlett@...cle.com,
alexandru.elisei@....com, peterx@...hat.com, hannes@...xchg.org,
mhocko@...nel.org, m.szyprowski@...sung.com, iamjoonsoo.kim@....com,
mina86@...a86.com, axboe@...nel.dk, viro@...iv.linux.org.uk,
brauner@...nel.org, hch@...radead.org, jack@...e.cz, hbathini@...ux.ibm.com,
sourabhjain@...ux.ibm.com, ritesh.list@...il.com, aneesh.kumar@...nel.org,
bhelgaas@...gle.com, sj@...nel.org, fvdl@...gle.com, ziy@...dia.com,
yuzhao@...gle.com, minchan@...nel.org, linux-mm@...ck.org,
linuxppc-dev@...ts.ozlabs.org, linux-block@...r.kernel.org,
linux-fsdevel@...r.kernel.org, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org, Minchan Kim <minchan@...gle.com>
Subject: Re: [RFC 3/3] mm: integrate GCMA with CMA using dt-bindings
On Fri, Mar 21, 2025 at 7:06 AM Conor Dooley <conor@...nel.org> wrote:
>
> On Thu, Mar 20, 2025 at 10:39:31AM -0700, Suren Baghdasaryan wrote:
> > This patch introduces a new "guarantee" property for shared-dma-pool.
> > With this property, an admin can create a specific memory pool as a
> > GCMA-based CMA if they care about allocation success rate and latency.
> > The downside of GCMA is that it can host only clean file-backed pages,
> > since it uses cleancache as its secondary user.
> >
> > Signed-off-by: Minchan Kim <minchan@...gle.com>
> > Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> > ---
> > arch/powerpc/kernel/fadump.c | 2 +-
> > include/linux/cma.h | 2 +-
> > kernel/dma/contiguous.c | 11 ++++++++++-
> > mm/cma.c | 33 ++++++++++++++++++++++++++-------
> > mm/cma.h | 1 +
> > mm/cma_sysfs.c | 10 ++++++++++
> > 6 files changed, 49 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
> > index 4b371c738213..4eb7be0cdcdb 100644
> > --- a/arch/powerpc/kernel/fadump.c
> > +++ b/arch/powerpc/kernel/fadump.c
> > @@ -111,7 +111,7 @@ void __init fadump_cma_init(void)
> > return;
> > }
> >
> > - rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
> > + rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma, false);
> > if (rc) {
> > pr_err("Failed to init cma area for firmware-assisted dump,%d\n", rc);
> > /*
> > diff --git a/include/linux/cma.h b/include/linux/cma.h
> > index 62d9c1cf6326..3207db979e94 100644
> > --- a/include/linux/cma.h
> > +++ b/include/linux/cma.h
> > @@ -46,7 +46,7 @@ extern int __init cma_declare_contiguous_multi(phys_addr_t size,
> > extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
> > unsigned int order_per_bit,
> > const char *name,
> > - struct cma **res_cma);
> > + struct cma **res_cma, bool gcma);
> > extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align,
> > bool no_warn);
> > extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned long count);
> > diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
> > index 055da410ac71..a68b3123438c 100644
> > --- a/kernel/dma/contiguous.c
> > +++ b/kernel/dma/contiguous.c
> > @@ -459,6 +459,7 @@ static int __init rmem_cma_setup(struct reserved_mem *rmem)
> > unsigned long node = rmem->fdt_node;
> > bool default_cma = of_get_flat_dt_prop(node, "linux,cma-default", NULL);
> > struct cma *cma;
> > + bool gcma;
> > int err;
> >
> > if (size_cmdline != -1 && default_cma) {
> > @@ -476,7 +477,15 @@ static int __init rmem_cma_setup(struct reserved_mem *rmem)
> > return -EINVAL;
> > }
> >
> > - err = cma_init_reserved_mem(rmem->base, rmem->size, 0, rmem->name, &cma);
> > + gcma = !!of_get_flat_dt_prop(node, "guarantee", NULL);
>
> When this (or if I guess) this goes !RFC, you will need to document this
> new property that you're adding.
Definitely. I'll document cleancache and GCMA as well.
Thanks!
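For anyone following along, a reserved-memory node using the proposed
property could look roughly like the sketch below. The node name and size
are illustrative, and "guarantee" is the name proposed in this RFC, so it
may change once the binding is documented:

```dts
/* Illustrative only: node name and size are made up, and the
 * "guarantee" property name is this RFC's proposal, not yet a
 * documented binding. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	linux,cma {
		compatible = "shared-dma-pool";
		reusable;
		guarantee;		/* back this pool with GCMA */
		size = <0x0 0x10000000>;	/* 256 MiB */
		linux,cma-default;
	};
};
```

With this node, rmem_cma_setup() in kernel/dma/contiguous.c would see the
"guarantee" flag via of_get_flat_dt_prop() and pass gcma=true down to
cma_init_reserved_mem(), as in the diff above.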