Message-ID: <20191021091806.v2buuugck5maxah5@DESKTOP-E1NTVVP.localdomain>
Date: Mon, 21 Oct 2019 09:18:07 +0000
From: Brian Starkey <Brian.Starkey@....com>
To: "Andrew F. Davis" <afd@...com>
CC: Ayan Halder <Ayan.Halder@....com>,
John Stultz <john.stultz@...aro.org>, nd <nd@....com>,
Sudipto Paul <Sudipto.Paul@....com>,
Vincent Donnefort <Vincent.Donnefort@....com>,
Chenbo Feng <fengc@...gle.com>,
Alistair Strachan <astrachan@...gle.com>,
Liam Mark <lmark@...eaurora.org>,
lkml <linux-kernel@...r.kernel.org>,
Christoph Hellwig <hch@...radead.org>,
DRI mailing list <dri-devel@...ts.freedesktop.org>,
Hridya Valsaraju <hridya@...gle.com>,
Pratik Patel <pratikp@...eaurora.org>
Subject: Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
On Sat, Oct 19, 2019 at 09:41:27AM -0400, Andrew F. Davis wrote:
> On 10/18/19 2:57 PM, Ayan Halder wrote:
> > On Fri, Oct 18, 2019 at 11:49:22AM -0700, John Stultz wrote:
> >> On Fri, Oct 18, 2019 at 11:41 AM Ayan Halder <Ayan.Halder@....com> wrote:
> >>> On Fri, Oct 18, 2019 at 09:55:17AM +0000, Brian Starkey wrote:
> >>>> On Thu, Oct 17, 2019 at 01:57:45PM -0700, John Stultz wrote:
> >>>>> On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd@...com> wrote:
> >>>>>> On 10/17/19 3:14 PM, John Stultz wrote:
> >>>>>>> But if the objection stands, do you have a proposal for an alternative
> >>>>>>> way to enumerate a subset of CMA heaps?
> >>>>>>>
> >>>>>> When ION was in staging it had to reach into the CMA framework, since
> >>>>>> the other direction would not have been allowed, so
> >>>>>> cma_for_each_area() was added. If DMA-BUF heaps is not in staging then
> >>>>>> we can do the opposite, and have the CMA framework register heaps
> >>>>>> itself using our framework. That way the CMA system could decide which
> >>>>>> areas to export or not (maybe based on a DT property or similar).
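
A rough sketch of what that could look like (illustrative only: it
reuses the existing cma_for_each_area() helper plus the
dma_heap_export_info / dma_heap_add() registration API from this
series, and __add_cma_heap / cma_heap_ops are made-up names, not code
from the series):

#include <linux/cma.h>
#include <linux/dma-heap.h>
#include <linux/err.h>
#include <linux/init.h>

/* Illustrative, empty ops table; a real heap would fill this in */
static const struct dma_heap_ops cma_heap_ops;

/* Sketch only: let the CMA framework export its areas as heaps */
static int __add_cma_heap(struct cma *cma, void *data)
{
        struct dma_heap_export_info exp_info;

        exp_info.name = cma_get_name(cma);
        exp_info.ops = &cma_heap_ops;
        exp_info.priv = cma;

        return PTR_ERR_OR_ZERO(dma_heap_add(&exp_info));
}

static int __init add_cma_heaps(void)
{
        /* This is where CMA could filter, e.g. on a DT property */
        return cma_for_each_area(__add_cma_heap, NULL);
}
device_initcall(add_cma_heaps);
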
> >>>>>
> >>>>> Ok. Though the CMA core doesn't have much sense of DT details either,
> >>>>> so it would probably have to be done in the reserved_mem logic, which
> >>>>> doesn't feel right to me.
> >>>>>
> >>>>> I'd probably guess we should have some sort of DT binding to describe
> >>>>> a dmabuf cma heap and from that node link to a CMA node via a
> >>>>> memory-region phandle. Along with maybe the default heap as well? Not
> >>>>> eager to get into another binding review cycle, and I'm not sure what
> >>>>> non-DT systems will do yet, but I'll take a shot at it and iterate.
> >>>>>
> >>>>>> The end result is the same, so we can make this change later (it has to
> >>>>>> come after DMA-BUF heaps is in anyway).
> >>>>>
> >>>>> Well, I'm hesitant to merge code that exposes all the CMA heaps and
> >>>>> then add patches that become more selective, should anyone depend on
> >>>>> the initial behavior. :/
> >>>>
> >>>> How about only auto-adding the system default CMA region (cma->name ==
> >>>> "reserved")?
> >>>>
> >>>> And/or the CMA auto-add could be behind a config option? It seems a
> >>>> shame to further delay this, and the CMA heap itself really is useful.
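
FWIW, filtering to just the default region could be pretty small.
Something like this, reusing the illustrative __add_cma_heap() from
the sketch above (dev_get_cma_area(NULL) returns the system default
CMA area):

#include <linux/dma-contiguous.h>

/* Sketch: register only the system default CMA area as a heap */
static int __init add_default_cma_heap(void)
{
        struct cma *default_cma = dev_get_cma_area(NULL);

        if (!default_cma)
                return 0;

        return __add_cma_heap(default_cma, NULL);
}
device_initcall(add_default_cma_heap);
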
> >>>>
> >>> A bit of a detour: coming back to the issue of why the following node
> >>> was not getting detected by the dma-buf heaps framework.
> >>>
> >>> reserved-memory {
> >>>         #address-cells = <2>;
> >>>         #size-cells = <2>;
> >>>         ranges;
> >>>
> >>>         display_reserved: framebuffer@60000000 {
> >>>                 compatible = "shared-dma-pool";
> >>>                 linux,cma-default;
> >>>                 reusable;  <---- This was missing in our earlier node
> >>>                 reg = <0 0x60000000 0 0x08000000>;
> >>>         };
> >>> };
> >>
> >> Right. It has to be a CMA region for us to expose it from the cma heap.
> >>
> >>
> >>> With 'reusable', rmem_cma_setup() succeeds, but the kernel crashes as follows:
> >>>
> >>> [ 0.450562] WARNING: CPU: 2 PID: 1 at mm/cma.c:110 cma_init_reserved_areas+0xec/0x22c
> >>
> >> Is the value 0x60000000 you're using something you just guessed at? It
> >> seems like the warning here is saying the pfn calculated from the base
> >> address isn't valid.
> > It is a valid memory region we use to allocate framebuffers.
>
>
> But does it have a valid kernel virtual mapping? Most ARM systems (just
> assuming you are working on ARM :)) that I'm familiar with have the DRAM
> space starting at 0x80000000 and so don't start having valid pfns until
> that point. Is this address you are reserving an SRAM?
>
Yeah, I think you've got it.

This region is DRAM on an FPGA expansion tile, but as you have noticed
it's "below" the start of main RAM, and I expect it's not in any of the
declared /memory/ nodes.
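
That would also explain the splat: unless I'm misreading mm/cma.c of
this era, line 110 lands in the pfn_valid() sanity check that
cma_activate_area() runs over every pfn in the area, roughly:

/* Paraphrased from cma_activate_area(), mm/cma.c */
for (j = pageblock_nr_pages; j; --j, pfn++) {
        /* Fires when the pfn has no memmap behind it */
        WARN_ON_ONCE(!pfn_valid(pfn));
        if (page_zone(pfn_to_page(pfn)) != zone)
                goto not_in_zone;
}

No /memory/ node covering the region means no memmap for it, so no
valid pfns.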
When "reusable" isn't there, I think we'll end up going the coherent.c
route, with dma_init_coherent_memory() setting up some pages.
If "reusable" is there, then I think we'll end up in contiguous.c and
that expects us to already have pages.
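
(For reference, that split is decided by the "reusable" property in
the reserved_mem setup hooks; paraphrasing the two checks:)

/* rmem_dma_setup(), kernel/dma/coherent.c: rejects reusable regions */
if (of_get_flat_dt_prop(node, "reusable", NULL))
        return -EINVAL;

/* rmem_cma_setup(), kernel/dma/contiguous.c: requires reusable,
 * rejects no-map
 */
if (!of_get_flat_dt_prop(node, "reusable", NULL) ||
    of_get_flat_dt_prop(node, "no-map", NULL))
        return -EINVAL;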

So, @Ayan, you could perhaps try adding this region as a /memory/ node
as well, which should mean the kernel sets up some pages for it as
normal memory. But I have some ancient recollection that the arm64
kernel couldn't handle system RAM at addresses below 0x80000000 or
something. That might be different now; I'm talking about several
years ago.
Thanks,
-Brian
> Andrew
>
>
> >>
> >> thanks
> >> -john