Message-ID: <20180504134033.wngpe5scyisreonn@flea>
Date: Fri, 4 May 2018 15:40:33 +0200
From: Maxime Ripard <maxime.ripard@...tlin.com>
To: Paul Kocialkowski <paul.kocialkowski@...tlin.com>
Cc: linux-media@...r.kernel.org, devicetree@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-sunxi@...glegroups.com,
Mauro Carvalho Chehab <mchehab@...nel.org>,
Rob Herring <robh+dt@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Chen-Yu Tsai <wens@...e.org>, Pawel Osciak <pawel@...iak.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Kyungmin Park <kyungmin.park@...sung.com>,
Hans Verkuil <hans.verkuil@...co.com>,
Sakari Ailus <sakari.ailus@...ux.intel.com>,
Philipp Zabel <p.zabel@...gutronix.de>,
Arnd Bergmann <arnd@...db.de>,
Alexandre Courbot <acourbot@...omium.org>,
Tomasz Figa <tfiga@...omium.org>
Subject: Re: [PATCH v2 09/10] ARM: dts: sun7i-a20: Add Video Engine and
reserved memory nodes
On Fri, May 04, 2018 at 02:04:38PM +0200, Paul Kocialkowski wrote:
> On Fri, 2018-05-04 at 11:15 +0200, Maxime Ripard wrote:
> > On Fri, May 04, 2018 at 10:47:44AM +0200, Paul Kocialkowski wrote:
> > > > > > > + reg = <0x01c0e000 0x1000>;
> > > > > > > + memory-region = <&ve_memory>;
> > > > > >
> > > > > > Since you made the CMA region the default one, you don't need
> > > > > > to tie it to that device in particular (and you can drop it
> > > > > > being mandatory from your binding as well).
> > > > >
> > > > > What if another driver (or the system) claims memory from that
> > > > > zone and the reserved memory ends up not being available for
> > > > > the VPU anymore?
> > > > >
> > > > > According to the reserved-memory documentation, the reusable
> > > > > property (that we need for dmabuf) puts a limitation that the
> > > > > device driver owning the region must be able to reclaim it back.
> > > > >
> > > > > How does that work out if the CMA region is not tied to a driver
> > > > > in particular?
> > > >
> > > > I'm not sure I get what you're saying. You have the property
> > > > linux,cma-default in your reserved region, so the behaviour you
> > > > described is what you explicitly asked for.
> > >
> > > My point is that I don't see how the driver can claim back (part
> > > of) the reserved area if the area is not explicitly attached to it.
> > >
> > > Or is that mechanism made in a way that all drivers wishing to use
> > > the reserved memory area can claim it back from the system, but
> > > there is no priority (other than first-come first-served) for which
> > > driver claims it back in case two want to use the same reserved
> > > region (in a scenario where there isn't enough memory to allow
> > > both drivers)?
> >
> > This is indeed what happens. Reusable is to let the system use the
> > reserved memory for things like caches that can easily be dropped when
> > a driver wants to use the memory in that reserved area. Once that
> > memory has been allocated, there's no claiming back, unless that
> > memory segment was freed of course.
>
> Thanks for the clarification. So in our case, perhaps the best fit
> would be to make that area the default CMA pool, so that we can be
> sure that the whole 96 MiB is available for the VPU and that no other
> consumer of CMA will use it?
The best fit for what use case? We already discussed this, and I
don't see any point in having two separate CMA regions. If you have a
reasonably sized region that will accommodate both the VPU and the
display engine, why would we want to split them?

Or did you have any experience of running out of buffers?
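
For reference, a single shared pool marked as the default CMA region
would look roughly like this (node name, address/size cells and the
96 MiB size are illustrative, not taken from the patch series):

	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* One pool shared by the VPU and the display engine */
		default_cma: linux,cma {
			compatible = "shared-dma-pool";
			reusable;
			size = <0x6000000>;	/* 96 MiB */
			linux,cma-default;
		};
	};

With linux,cma-default set, any driver that doesn't carry an explicit
memory-region property falls back to this pool, which is why the
memory-region = <&ve_memory>; reference can be dropped from the VE
node.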
Maxime
--
Maxime Ripard, Bootlin (formerly Free Electrons)
Embedded Linux and Kernel engineering
https://bootlin.com