Message-ID: <dd6b502c66aad1fe34eb0b16a1e44a4ebfd172f1.camel@bootlin.com>
Date:   Fri, 04 May 2018 15:57:48 +0200
From:   Paul Kocialkowski <paul.kocialkowski@...tlin.com>
To:     Maxime Ripard <maxime.ripard@...tlin.com>
Cc:     linux-media@...r.kernel.org, devicetree@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        linux-sunxi@...glegroups.com,
        Mauro Carvalho Chehab <mchehab@...nel.org>,
        Rob Herring <robh+dt@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        Chen-Yu Tsai <wens@...e.org>, Pawel Osciak <pawel@...iak.com>,
        Marek Szyprowski <m.szyprowski@...sung.com>,
        Kyungmin Park <kyungmin.park@...sung.com>,
        Hans Verkuil <hans.verkuil@...co.com>,
        Sakari Ailus <sakari.ailus@...ux.intel.com>,
        Philipp Zabel <p.zabel@...gutronix.de>,
        Arnd Bergmann <arnd@...db.de>,
        Alexandre Courbot <acourbot@...omium.org>,
        Tomasz Figa <tfiga@...omium.org>
Subject: Re: [PATCH v2 09/10] ARM: dts: sun7i-a20: Add Video Engine and
 reserved memory nodes

On Fri, 2018-05-04 at 15:40 +0200, Maxime Ripard wrote:
> On Fri, May 04, 2018 at 02:04:38PM +0200, Paul Kocialkowski wrote:
> > On Fri, 2018-05-04 at 11:15 +0200, Maxime Ripard wrote:
> > > On Fri, May 04, 2018 at 10:47:44AM +0200, Paul Kocialkowski wrote:
> > > > > > > > +			reg = <0x01c0e000 0x1000>;
> > > > > > > > +			memory-region = <&ve_memory>;
> > > > > > > 
> > > > > > > Since you made the CMA region the default one, you don't
> > > > > > > need to tie it to that device in particular (and you can
> > > > > > > drop it being mandatory from your binding as well).
> > > > > > 
> > > > > > What if another driver (or the system) claims memory from
> > > > > > that zone, so that the reserved memory ends up not being
> > > > > > available for the VPU anymore?
> > > > > > 
> > > > > > According to the reserved-memory documentation, the
> > > > > > reusable property (which we need for dma-buf) comes with
> > > > > > the requirement that the device driver owning the region
> > > > > > must be able to reclaim it back.
> > > > > > 
> > > > > > How does that work out if the CMA region is not tied to a
> > > > > > driver in particular?
> > > > > 
> > > > > I'm not sure I get what you're saying. You have the property
> > > > > linux,cma-default in your reserved region, so the behaviour
> > > > > you described is what you explicitly asked for.
> > > > 
> > > > My point is that I don't see how the driver can claim back
> > > > (part of) the reserved area if the area is not explicitly
> > > > attached to it.
> > > > 
> > > > Or is that mechanism designed so that all drivers wishing to
> > > > use the reserved memory area can claim it back from the system,
> > > > with no priority (other than first-come, first-served) for
> > > > which driver claims it back in case two want to use the same
> > > > reserved region (in a scenario where there isn't enough memory
> > > > for both drivers)?
> > > 
> > > This is indeed what happens. Reusable is there to let the system
> > > use the reserved memory for things like caches that can easily
> > > be dropped when a driver wants to use the memory in that reserved
> > > area. Once that memory has been allocated, there is no claiming
> > > it back, unless that memory segment was freed, of course.
> > 
> > Thanks for the clarification. So in our case, perhaps the best fit
> > would be to make that area a dedicated CMA pool, so that we can be
> > sure that the whole 96 MiB is available for the VPU and that no
> > other consumer of CMA will use it?
> 
> The best fit for what use case? We already discussed this, and I
> don't see any point in having two separate CMA regions. If you have a
> reasonably sized region that will accommodate both the VPU and the
> display engine, why would we want to split them?

The use case I have in mind is the typical combined use of the VPU and
the display engine, say through dma-buf.

I didn't clearly remember whether we had decided that the CMA pool we
use for the VPU should also serve other CMA consumers (though I realize
that this is what we've been doing all along by having
linux,cma-default).
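
For the record, the layout under discussion would look roughly like
this; the 96 MiB size (0x6000000) is the one we discussed, but the base
address below is only an example, not taken from the actual patch:

	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* Default CMA pool, shared by every CMA consumer in the
		 * system. Example base address only. */
		ve_memory: cma@4a000000 {
			compatible = "shared-dma-pool";
			reg = <0x4a000000 0x6000000>;
			reusable;
			linux,cma-default;
		};
	};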

That the memory region will accommodate both the display engine and the
VPU is not obvious IMO, and I think this has to be an explicit choice
that we make. I was under the impression that we chose 96 MiB because
that's what the Allwinner reference code does. Does the reference code
also use this pool for display?

I liked the idea of having the 96 MiB fully reserved for the VPU
because it allows us to state a guarantee for the use case, such as
"this guarantees N buffers at resolution foo in format bar". If the
display engine also uses the pool, then that limit also depends on how
many GEM buffers are allocated (unless I'm missing something).
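
As a sketch, the alternative I had in mind would keep the pool out of
the default CMA and tie it to the VPU alone via memory-region (again
with an example base address, and with the VE node trimmed down to the
two properties quoted above):

	ve_memory: cma@4a000000 {
		compatible = "shared-dma-pool";
		reg = <0x4a000000 0x6000000>;
		reusable;
		/* no linux,cma-default: only devices referencing this
		 * region through memory-region allocate from it */
	};

	ve: video-engine@1c0e000 {
		/* other properties omitted for brevity */
		reg = <0x01c0e000 0x1000>;
		memory-region = <&ve_memory>;
	};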

> Or did you have any experience of running out of buffers?

Not yet, I haven't. I have no objection to making the reserved region
the default CMA pool and having other consumers use it too.

Cheers,

-- 
Paul Kocialkowski, Bootlin (formerly Free Electrons)
Embedded Linux and kernel engineering
https://bootlin.com