Message-ID: <20190725143335.GB21894@infradead.org>
Date: Thu, 25 Jul 2019 07:33:35 -0700
From: Christoph Hellwig <hch@...radead.org>
To: Benjamin Gaignard <benjamin.gaignard@...aro.org>
Cc: Christoph Hellwig <hch@...radead.org>,
John Stultz <john.stultz@...aro.org>,
lkml <linux-kernel@...r.kernel.org>,
Laura Abbott <labbott@...hat.com>,
Sumit Semwal <sumit.semwal@...aro.org>,
Liam Mark <lmark@...eaurora.org>,
Pratik Patel <pratikp@...eaurora.org>,
Brian Starkey <Brian.Starkey@....com>,
Vincent Donnefort <Vincent.Donnefort@....com>,
Sudipto Paul <Sudipto.Paul@....com>,
"Andrew F . Davis" <afd@...com>,
Xu YiPing <xuyiping@...ilicon.com>,
"Chenfeng (puck)" <puck.chen@...ilicon.com>,
butao <butao@...ilicon.com>,
"Xiaqing (A)" <saberlily.xia@...ilicon.com>,
Yudongbin <yudongbin@...ilicon.com>,
Chenbo Feng <fengc@...gle.com>,
Alistair Strachan <astrachan@...gle.com>,
dri-devel <dri-devel@...ts.freedesktop.org>
Subject: Re: [PATCH v6 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps
On Thu, Jul 25, 2019 at 03:20:11PM +0200, Benjamin Gaignard wrote:
> > But that just means we need a flag that memory needs to be contiguous,
> > which totally makes sense at the API level. But CMA is not the only
> > source of contiguous memory, so we should not conflate the two.
>
> We have one file descriptor per heap to be able to add access control
> on each heap. That wasn't possible with ION, because the heap was
> selected via the flags in the ioctl structure, and we can't do access
> control based on that. If we put a flag to select the allocation
> mechanism (system, CMA, other) in the ioctl, we are back to ION's
> situation. For me, one allocation mechanism = one heap.
Well, I agree with your split for fundamentally different allocators.
But the point is that CMA (at least the system CMA area) fundamentally
isn't a different allocator. The per-device CMA areas still are kinda
the same, but you can just have one fd for each per-device CMA area
to make your life simple.
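
To make that concrete, here is a minimal userspace sketch of the model
being discussed, written against the chardev interface this series
proposes (the heap names and device paths below are illustrative, not
taken from the patches):

/*
 * Sketch: allocate a dma-buf from a named heap. Each heap is its own
 * device node, so per-heap access control is just file permissions on
 * /dev/dma_heap/* -- nothing in the ioctl selects an allocator.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

static int heap_alloc(const char *heap_path, size_t len)
{
	struct dma_heap_allocation_data data;
	int heap_fd, ret;

	heap_fd = open(heap_path, O_RDONLY);
	if (heap_fd < 0)
		return -1;

	memset(&data, 0, sizeof(data));
	data.len = len;
	data.fd_flags = O_RDWR | O_CLOEXEC;

	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);
	if (ret < 0)
		return -1;

	return data.fd;	/* dma-buf fd backing the allocation */
}

int main(void)
{
	/* Discontiguous pages from the system heap... */
	int sys_buf = heap_alloc("/dev/dma_heap/system", 1 << 20);
	/* ...or contiguous memory from a (per-device) CMA heap. */
	int cma_buf = heap_alloc("/dev/dma_heap/cma", 1 << 20);

	if (sys_buf >= 0)
		close(sys_buf);
	if (cma_buf >= 0)
		close(cma_buf);
	return 0;
}

A chmod/chown (or security label) on each node then gives exactly the
per-heap access control Benjamin describes, without any heap-selection
flags in the allocation ioctl.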