Message-ID: <dd2fykdes4upolbxbdn2d56mcnbmn34cvchpuivczbrmuntoif@uv437quy7tqz>
Date: Fri, 12 Jul 2024 12:37:11 +0200
From: Thierry Reding <thierry.reding@...il.com>
To: Maxime Ripard <mripard@...nel.org>
Cc: John Stultz <jstultz@...gle.com>, Rob Herring <robh@...nel.org>,
Saravana Kannan <saravanak@...gle.com>, Sumit Semwal <sumit.semwal@...aro.org>,
Benjamin Gaignard <benjamin.gaignard@...labora.com>, Brian Starkey <Brian.Starkey@....com>,
"T.J. Mercier" <tjmercier@...gle.com>, Christian König <christian.koenig@....com>,
Mattijs Korpershoek <mkorpershoek@...libre.com>, devicetree@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-media@...r.kernel.org, dri-devel@...ts.freedesktop.org, linaro-mm-sig@...ts.linaro.org
Subject: Re: [PATCH 0/8] dma-buf: heaps: Support carved-out heaps and ECC
related-flags
On Wed, Jul 10, 2024 at 02:10:09PM GMT, Maxime Ripard wrote:
> On Fri, Jul 05, 2024 at 04:31:34PM GMT, Thierry Reding wrote:
> > On Thu, Jul 04, 2024 at 02:24:49PM GMT, Maxime Ripard wrote:
> > > On Fri, Jun 28, 2024 at 04:42:35PM GMT, Thierry Reding wrote:
> > > > On Fri, Jun 28, 2024 at 03:08:46PM GMT, Maxime Ripard wrote:
> > > > > Hi,
> > > > >
> > > > > On Fri, Jun 28, 2024 at 01:29:17PM GMT, Thierry Reding wrote:
> > > > > > On Tue, May 21, 2024 at 02:06:19PM GMT, Daniel Vetter wrote:
> > > > > > > On Thu, May 16, 2024 at 09:51:35AM -0700, John Stultz wrote:
> > > > > > > > On Thu, May 16, 2024 at 3:56 AM Daniel Vetter <daniel@...ll.ch> wrote:
> > > > > > > > > On Wed, May 15, 2024 at 11:42:58AM -0700, John Stultz wrote:
> > > > > > > > > > But it makes me a little nervous to add a new generic allocation flag
> > > > > > > > > > for a feature most hardware doesn't support (yet, at least). So it's
> > > > > > > > > > hard to weigh how common the actual usage will be across all the
> > > > > > > > > > heaps.
> > > > > > > > > >
> > > > > > > > > > I apologize as my worry is mostly born out of seeing vendors really
> > > > > > > > > > push opaque feature flags in their old ion heaps, so in providing a
> > > > > > > > > > flags argument, it was mostly intended as an escape hatch for
> > > > > > > > > > obviously common attributes. So having the first be something that
> > > > > > > > > > seems reasonable, but isn't actually that common makes me fret some.
> > > > > > > > > >
> > > > > > > > > > So again, not an objection, just something for folks to stew on to
> > > > > > > > > > make sure this is really the right approach.
> > > > > > > > >
> > > > > > > > > Another good reason to go with full heap names instead of opaque flags on
> > > > > > > > > existing heaps is that with the former we can use symlinks in sysfs to
> > > > > > > > > specify heaps, with the latter we need a new idea. We haven't yet gotten
> > > > > > > > > around to implement this anywhere, but it's been in the dma-buf/heap todo
> > > > > > > > > since forever, and I like it as a design approach. So would be a good idea
> > > > > > > > > to not toss it. With that display would have symlinks to cma-ecc and cma,
> > > > > > > > > and rendering maybe cma-ecc, shmem, cma heaps (in priority order) for a
> > > > > > > > > SoC where the display needs contig memory for scanout.
> > > > > > > >
> > > > > > > > So indeed that is a good point to keep in mind, but I also think it
> > > > > > > > might reinforce the choice of having ECC as a flag here.
> > > > > > > >
> > > > > > > > Since my understanding of the sysfs symlinks to heaps idea is about
> > > > > > > > being able to figure out a common heap from a collection of devices,
> > > > > > > > it's really about the ability for the driver to access the type of
> > > > > > > > memory. If ECC is just an attribute of the type of memory (as in this
> > > > > > > > patch series), it being on or off won't necessarily affect
> > > > > > > > compatibility of the buffer with the device. Similarly "uncached"
> > > > > > > > seems more of an attribute of memory type and not a type itself.
> > > > > > > > Hardware that can access non-contiguous "system" buffers can access
> > > > > > > > uncached system buffers.
> > > > > > >
> > > > > > > Yeah, but in graphics there's a wide band where "shit performance" is
> > > > > > > de facto "not usable (as intended, at least)".
> > > > > > >
> > > > > > > So if we limit the symlink idea to just making sure zero-copy access is
> > > > > > > possible, then we might not actually solve the real world problem we need
> > > > > > > to solve. And so the symlinks become somewhat useless, and we need to
> > > > > > > somewhere encode which flags you need to use with each symlink.
> > > > > > >
> > > > > > > But I also see the argument that there's a bit of a combinatorial explosion
> > > > > > > possible. So I guess the question is where we want to handle it ...
> > > > > >
> > > > > > Sorry for jumping into this discussion so late. But are we really
> > > > > > concerned about this combinatorial explosion in practice? It may be
> > > > > > theoretically possible to create any combination of these, but do we
> > > > > > expect more than a couple of heaps to exist in any given system?
> > > > >
> > > > > I don't worry too much about the number of heaps available in a given
> > > > > system, it would indeed be fairly low.
> > > > >
> > > > > My concern is about the semantics combinatorial explosion. So far, the
> > > > > name has carried what semantics we were supposed to get from the buffer
> > > > > we allocate from that heap.
> > > > >
> > > > > The more variations and concepts we'll have, the more heap names we'll
> > > > > need, and with confusing names since we wouldn't be able to change the
> > > > > names of the heaps we already have.
> > > >
> > > > What I was trying to say is that none of this matters if we make these
> > > > names opaque. If these names are contextual for the given system it
> > > > doesn't matter what the exact capabilities are. It only matters that
> > > > their purpose is known and that's what applications will be interested
> > > > in.
> > >
> > > If the names are opaque, and we don't publish what the exact
> > > capabilities are, how can an application figure out which heap to use in
> > > the first place?
> >
> > This would need to be based on conventions. The idea is to standardize
> > on a set of names for specific, well-known use-cases.
Sorry, hadn't seen all of your comments in this mail before, a few more
notes below.
> How can undocumented, unenforced conventions work in practice?
Unenforced, perhaps, yes, but who says that these conventions need to
be undocumented?
> > > > > > Would it perhaps make more sense to let a platform override the heap
> > > > > > name to make it more easily identifiable? Maybe this is a naive
> > > > > > assumption, but aren't userspace applications and drivers primarily
> > > > > > interested in the "type" of heap rather than whatever specific flags
> > > > > > have been set for it?
> > > > >
> > > > > I guess it depends on what you call the type of a heap. Where we
> > > > > allocate the memory from, sure, an application won't care about that.
> > > > > How the buffer behaves on the other end is definitely something
> > > > > applications are going to be interested in though.
> > > >
> > > > Most of these heaps will be very specific, I would assume.
> > >
> > > We don't have any specific heap upstream at the moment, only generic
> > > ones.
> >
> > But we're trying to add more specific ones, right?
> >
> > > > For example a heap that is meant to be protected for protected video
> > > > decoding is both going to be created in such a way as to allow that
> > > > use-case (i.e. it doesn't make sense for it to be uncached, for
> > > > example) and it's also not going to be useful for any other use-case
> > > > (i.e. there's no reason to use that heap for GPU jobs or networking,
> > > > or whatever).
> > >
> > > Right. But also, libcamera has started to use dma-heaps to allocate
> > > dma-capable buffers and do software processing on it before sending it
> > > to some hardware controller.
> > >
> > > Caches are critical here, and getting a non-cacheable buffer would be
> > > a clear regression.
> >
> > I understand that. My point is that maybe we shouldn't try to design a
> > complex mechanism that allows full discoverability of everything that a
> > heap supports or is capable of. Instead if the camera has specific
> > requirements, it could look for a heap named "camera". Or if it can
> > share a heap with other multimedia devices, maybe call the heap
> > "multimedia".
>
> That kind of vague categorization is pointless though. Some criteria are
> about hardware (i.e., can the device access it in the first place?), some
> are purely about a particular context and policy and will change from one
> application to the other.
>
> A camera app using an ISP will not care about caches. A software
> rendering library will. A compositor will not want ECC. A safety
> component probably will.
>
> All of them are "multimedia".
>
> We *need* to be able to differentiate policy from hardware requirements.
Do we really? My point is that if we have, say, a safety component that
needs hardware and software to access certain memory, then by definition
that memory needs to have properties that satisfy both the hardware *and* the
software components involved with that memory. Otherwise it's all just
not going to work.
If you have an ISP that never needs to pass the buffer to software for
post-processing or whatever, then there's hardly a need for that buffer
to be cached. On the other hand, if the system requires software post-
processing, I bet you that the system will be designed such that the ISP
and software can efficiently access that particular shared memory region
or else, again, the system won't work.
Given that these are special purpose carveout regions, I have a hard
time imagining somebody creating arbitrary heaps just for the sake of
it.
> > The idea is that heaps for these use-cases are quite specific, so you
> > would likely not find an arbitrary number of processes try to use the
> > same heap.
>
> Some of them are specific, some of them aren't.
Which ones wouldn't be specific? Of course I can /think/ of arbitrarily
generic heaps, but the real question is whether we are going to
encounter these in practice.
> > > How can it know which heap to allocate from on a given platform?
> > >
> > > Similarly with the ECC support we started that discussion with. ECC will
> > > introduce a significant performance cost. How can a generic application,
> > > such as a compositor, know which heap to allocate from without:
> > >
> > > a) Trying to bundle up a list of heaps for each platform it might or
> > > might not run on
> > >
> > > b) and handling the name difference between BSPs and mainline.
> >
> > Obviously some standardization of heap names is a requirement here,
> > otherwise such a proposal does indeed not make sense.
> >
> > > If some hardware-specific applications / middleware want to take a
> > > shortcut and use the name, that's fine. But we need to find a way for
> > > generic applications to discover which heap is best suited for their
> > > needs without the name.
> >
> > You can still have fairly generic names for heaps. If you want protected
> > content, you could try to use a standard "video-protected" heap. If you
> > need ECC protected memory, maybe you want to allocate from a heap named
> > "safety", or whatever.
>
> And if I need cacheable, physically contiguous, "multimedia" buffers from
> ECC protected memory?
Again, I think you're trying to design for a very theoretically generic
use-case that doesn't exist.
Note also that I'm not necessarily talking about global names here, but
if necessary these could be per-device or per-use-case. If you have ECC
protected memory that you may want to use in certain cases, you could
call this "safety" *in the context* of "multimedia". So you could
associate multiple multimedia heaps with a video encoder. One could be
used if only plain physically contiguous memory is needed, and another
would be used if ECC protection is needed.
These two heaps could be different from regular and safety heaps of a
camera, for example.
So even if we have a fairly large number of heaps globally, I expect
the number of heaps per-use-case to be very small (and easily named).
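To make that concrete: the per-device association could end up looking
something like the symlink scheme from the dma-buf TODO mentioned earlier
in the thread. Nothing like this exists upstream today, so the sketch
below only mimics the shape of such a layout under /tmp; every path,
heap name, and the priority-prefix convention in it is made up:

```shell
# Hypothetical layout: a device directory links to the heaps it can use,
# in priority order. Sketched under /tmp; no such sysfs interface exists.
mkdir -p /tmp/heap-sketch/video-encoder/heaps
ln -sf /dev/dma_heap/multimedia \
    /tmp/heap-sketch/video-encoder/heaps/0-multimedia
ln -sf /dev/dma_heap/multimedia-safety \
    /tmp/heap-sketch/video-encoder/heaps/1-safety
# An application would pick the first (highest-priority) heap that
# matches its use-case by name.
ls /tmp/heap-sketch/video-encoder/heaps
```

A camera device could carry its own "regular" and "safety" links pointing
at entirely different heaps, which is the per-use-case scoping described
above.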
> > > > > And if we allow any platform to change a given heap name, then a generic
> > > > > application won't be able to support that without some kind of
> > > > > platform-specific configuration.
> > > >
> > > > We could still standardize on common use-cases so that applications
> > > > would know what heaps to allocate from. But there's also no need to
> > > > arbitrarily restrict this. For example there could be cases that are
> > > > very specific to a particular platform and which just doesn't exist
> > > > anywhere else. Platform designers could then still use this mechanism to
> > > > define that very particular heap and have a very specialized userspace
> > > > application use that heap for their purpose.
> > >
> > > We could just add a different capability flag to make sure those would
> > > get ignored.
> >
> > Sure you can do all of this with a myriad of flags. But again, I'm
> > trying to argue that we may not need this additional complexity. In a
> > typical system, how many heaps do you encounter? You may need a generic
> > one and then perhaps a handful specific ones? Or do you need more?
>
> It's not a matter of the number of heaps, but what they provide.
It sounds like you want to design a system that allows any arbitrary
number of carveouts to be defined, each with its own unique combination
of capabilities. I'm afraid that's going to be overly complex and end up
in a system that is very difficult to use. If I recall correctly there
have been attempts to do something like this in the past (GBM allocator)
and they didn't really go anywhere.
Ultimately I think we need to find the practical applications for this
and then base the design on what the real world requirements are.
> > > > > > For example, if an applications wants to use a protected buffer, the
> > > > > > application doesn't (and shouldn't need to) care about whether the heap
> > > > > > for that buffer supports ECC or is backed by CMA. All it really needs to
> > > > > > know is that it's the system's "protected" heap.
> > > > >
> > > > > I mean... "protected" very much means backed by CMA already, it's pretty
> > > > > much the only thing we document, and we call it as such in Kconfig.
> > > >
> > > > Well, CMA is really just an implementation detail, right? It doesn't
> > > > make sense to advertise that to anything outside the kernel. Maybe it's
> > > > an interesting fact that buffers allocated from these heaps will be
> > > > physically contiguous?
> > >
> > > CMA itself might be an implementation detail, but it's still right there
> > > in the name on ARM.
> >
> > That doesn't mean we can do something more useful going forward (and
> > perhaps symlink for backwards-compatibility if needed).
> >
> > > And being able to get physically contiguous buffers is critical on
> > > platforms without an IOMMU.
> >
> > Again, I'm not trying to dispute the necessity of contiguous buffers.
> > I'm trying to say that contextual names can be a viable alternative to
> > full discoverability. If you want contiguous buffers, go call the heap
> > "contiguous" and it's quite clear what it means.
> >
> > You can even hide details such as IOMMU availability from userspace that
> > way. On a system where an IOMMU is present, you could for example go and
> > use IOMMU-backed memory in a "contiguous" heap, while on a system
> > without an IOMMU the memory for the "contiguous" heap could come from
> > CMA.
>
> I can see the benefits from that, and it would be quite nice indeed.
> However, it still only addresses the "hardware" part of the requirements
> (ie, is it contiguous, accessible, etc.). It doesn't address
> applications having different requirements when it comes to what kind of
> attributes they'd like/need to get from the buffer.
>
> If one application in the system wants contiguous (using your definition
> just above) buffers without caches, and the other wants to have
> contiguous cacheable buffers, if we're only using the name we'd need to
> instantiate two heaps, from the same allocator, for what's essentially a
> mapping attribute.
This sounds very hypothetical to me. Maybe we have a fundamentally
different view of what these heaps are supposed to be, but in my view
they are very specific regions of memory that serve a special purpose,
so they are very unlikely going to need a lot of flexibility. If one
application is going to require uncached buffers, then any application
is likely going to require uncached buffers for that particular use-
case. In fact, I'd say there's probably only one application using the
functionality in the first place.
Again, I realize that I may have a very limited picture of what is
needed for existing use-cases, so maybe we can start collecting some
data about real-world use-cases for these carveouts to get a better
understanding of what we need?
> It's more complex for the kernel, more code to maintain, and more
> complex for applications too because they need to know about what a
> given name means for that particular context.
I don't think it will be very complex or a lot of code to make this
name-based. In fact I expect it to become quite simple. There's going to
have to be some (generic) code that knows how to link carveouts to the
devices that use them, but the rest should be pretty straightforward.
As for applications, isn't it going to be much easier to request a heap
allocation "by name" rather than having to discover all heaps and
determining the best one?
Thierry