Message-ID: <181f20d0403.121f433c8600165.2068876337784123868@linux.beauty>
Date: Tue, 12 Jul 2022 18:55:48 +0800
From: Li Chen <me@...ux.beauty>
To: "Arnd Bergmann" <arnd@...db.de>
Cc: "Catalin Marinas" <catalin.marinas@....com>,
"Will Deacon" <will@...nel.org>,
"Rob Herring" <robh+dt@...nel.org>,
"Frank Rowand" <frowand.list@...il.com>,
"Andrew Morton" <akpm@...ux-foundation.org>,
"Li Chen" <lchen@...arella.com>,
"Linux ARM" <linux-arm-kernel@...ts.infradead.org>,
"Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>,
"DTML" <devicetree@...r.kernel.org>,
"Linux-MM" <linux-mm@...ck.org>
Subject: Re: [PATCH 4/4] sample/reserved_mem: Introduce a sample of struct
page and dio support to no-map rmem
Hi Arnd,
---- On Tue, 12 Jul 2022 18:08:10 +0800 Arnd Bergmann <arnd@...db.de> wrote ---
> On Tue, Jul 12, 2022 at 11:58 AM Li Chen <me@...ux.beauty> wrote:
> > > On Tue, Jul 12, 2022 at 2:26 AM Li Chen <me@...ux.beauty> wrote:
> > > > ---- On Mon, 11 Jul 2022 21:28:10 +0800 Arnd Bergmann <arnd@...db.de> wrote ---
> > > > > On Mon, Jul 11, 2022 at 2:24 PM Li Chen <me@...ux.beauty> wrote:
> > > > > The problem here is that the DT is meant to describe the platform in an OS
> > > > > independent way, so having a binding that just corresponds to a user space
> > > > > interface is not a good abstraction.
> > > >
> > > > Gotcha, but IMO dts + rmem is the only choice for our use case. In our real
> > > > case, we use reg instead of size to specify the physical address, so
> > > > memremap cannot be used.
> > >
> > > Does your hardware require a fixed address for the buffer? If it can be
> > > anywhere in memory (or at least within a certain range) but just has to
> > > be physically contiguous, the normal way would be to use a CMA area
> > > to allocate from, which gives you 'struct page' backed pages.
> >
> > The limitation is that our DSP can only address 32-bit memory, but the total DRAM is > 4G, so I cannot use
> > "size = <...>" in our real case (it might get memory above 4G). I'm not sure whether other vendors' DSPs have
> > the same limitation, and if so, how they deal with it when throughput matters.
>
> This is a common limitation that gets handled automatically by setting
> the dma_mask of the device through the dma-ranges property in DT.
> When the driver does dma_alloc_coherent() or similar to get its buffer,
> it will then allocate pages below this boundary.
Thanks for the tip! I wasn't aware that dma-ranges could be used for devices other than PCIe controllers.
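To make sure I follow, the driver side would then be roughly the sketch below; "xyz_dsp", the compatible string and the sizes are placeholders I made up, not our real driver:

/*
 * Very rough sketch of what I understand the driver side would look
 * like once dma-ranges on the parent bus describes the 32-bit window.
 */
#include <linux/dma-mapping.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>

struct xyz_dsp {
	void *vaddr;
	dma_addr_t dma;
};

static int xyz_dsp_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct xyz_dsp *dsp;
	int ret;

	dsp = devm_kzalloc(dev, sizeof(*dsp), GFP_KERNEL);
	if (!dsp)
		return -ENOMEM;

	/*
	 * With dma-ranges describing the 32-bit limit, the core already
	 * restricts bus addresses for this device; setting the mask here
	 * just documents the constraint and fails early if it cannot be met.
	 */
	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
	if (ret)
		return ret;

	/* dsp->dma is then guaranteed to be a 32-bit bus address. */
	dsp->vaddr = dmam_alloc_coherent(dev, SZ_16M, &dsp->dma, GFP_KERNEL);
	if (!dsp->vaddr)
		return -ENOMEM;

	platform_set_drvdata(pdev, dsp);
	return 0;
}

static const struct of_device_id xyz_dsp_of_match[] = {
	{ .compatible = "xyz,dsp" },	/* placeholder */
	{ }
};
MODULE_DEVICE_TABLE(of, xyz_dsp_of_match);

static struct platform_driver xyz_dsp_driver = {
	.probe = xyz_dsp_probe,
	.driver = {
		.name = "xyz-dsp",
		.of_match_table = xyz_dsp_of_match,
	},
};
module_platform_driver(xyz_dsp_driver);

MODULE_LICENSE("GPL");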
> If you need a large contiguous memory area, then using CMA allows
> you to specify a region of memory that is kept reserved for DMA
> allocations, so a call to dma_alloc_coherent() on your device will
> get contiguous pages from that area, and move other data in those
> pages elsewhere if necessary. Non-movable data is allocated from
> pages outside of the CMA reserved area in this case.
We need a large memory pool, around 2G. I will try CMA and dma-ranges later!
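For the record, here is roughly how I picture the CMA route, continuing the sketch above; the reserved-memory layout in the comment, the node names and the sizes are just my guesses, so please correct me if I misunderstood:

/*
 * For the ~2 GB pool, I assume the reserved-memory node would look
 * something like this (labels and ranges are placeholders):
 *
 *	reserved-memory {
 *		#address-cells = <2>;
 *		#size-cells = <2>;
 *		ranges;
 *
 *		dsp_pool: dsp-pool {
 *			compatible = "shared-dma-pool";
 *			reusable;
 *			size = <0x0 0x80000000>;		<- 2 GB
 *			alloc-ranges = <0x0 0x0 0x1 0x0>;	<- below 4 GB
 *		};
 *	};
 *
 * with memory-region = <&dsp_pool>; in the DSP node, and then this
 * would be called from the probe above:
 */
#include <linux/dma-mapping.h>
#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>

static int xyz_dsp_setup_pool(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	dma_addr_t dma;
	void *vaddr;
	int ret;

	/* Attach the CMA area referenced by memory-region to this device. */
	ret = of_reserved_mem_device_init(dev);
	if (ret)
		return ret;

	/* Large contiguous allocations now come from that reserved area. */
	vaddr = dmam_alloc_coherent(dev, SZ_256M, &dma, GFP_KERNEL);
	if (!vaddr) {
		of_reserved_mem_device_release(dev);
		return -ENOMEM;
	}

	/* hand vaddr/dma over to the DSP firmware here */
	return 0;
}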
Regards,
Li