Message-ID: <20211020112511.0b6b93ba@eldfell>
Date: Wed, 20 Oct 2021 11:25:11 +0300
From: Pekka Paalanen <ppaalanen@...il.com>
To: Igor Matheus Andrade Torrente <igormtorrente@...il.com>
Cc: rodrigosiqueiramelo@...il.com, melissa.srw@...il.com,
hamohammed.sa@...il.com, daniel@...ll.ch, airlied@...ux.ie,
contact@...rsion.fr, leandro.ribeiro@...labora.com,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 6/6] drm: vkms: Refactor the plane composer to accept
new formats
On Tue, 19 Oct 2021 18:10:57 -0300
Igor Matheus Andrade Torrente <igormtorrente@...il.com> wrote:
> Hi Pekka,
>
> On 10/19/21 5:05 AM, Pekka Paalanen wrote:
> > On Mon, 18 Oct 2021 16:26:06 -0300
> > Igor Matheus Andrade Torrente <igormtorrente@...il.com> wrote:
> >
> >> Hi Pekka,
> >>
> >> On 10/18/21 5:30 AM, Pekka Paalanen wrote:
> >>> On Tue, 5 Oct 2021 17:16:37 -0300
> >>> Igor Matheus Andrade Torrente <igormtorrente@...il.com> wrote:
> >>>
> >>>> Currently the blend function only accepts XRGB_8888 and ARGB_8888
> >>>> as a color input.
> >>>>
> >>>> This patch refactors all the functions related to the plane composition
> >>>> to overcome this limitation.
> >>>>
> >>>> Now the blend function receives a format handler to each plane and a
> >>>> blend function pointer. It will take two ARGB_16161616 pixels, one
> >>>> for each handler, and will use the blend function to calculate and
> >>>> store the final color in the output buffer.
> >>>>
> >>>> These format handlers will receive the `vkms_composer` and a pair of
> >>>> coordinates. And they should return the respective pixel in the
> >>>> ARGB_16161616 format.
> >>>>
> >>>> The blend function will receive two ARGB_16161616 pixels, x, y, and
> >>>> the vkms_composer of the output buffer. The method should perform the
> >>>> blend operation and store the output in the aforementioned
> >>>> ARGB_16161616 format.
> >>>
> >>> Hi,
> >>>
> >>> here are some drive-by comments which you are free to take or leave.
> >>>
> >>> To find more efficient ways to do this, you might want to research the
> >>> Pixman library. It's MIT licensed.
> >>>
> >>> IIRC, the elementary operations there operate on pixel lines (pointer +
> >>> length), not individual pixels (image, x, y). The input conversion
> >>> function reads and converts a whole line at a time. The blending
> >>> function blends two lines and writes the output back into one of the
> >>> lines. The output conversion function takes a line and converts it into
> >>> the destination buffer. That way the elementary operations do not need
> >>> to take stride into account, and blending does not need to deal with
> >>> varying memory alignment, FWIW. The inner loops involve far fewer
> >>> function calls and less state, probably.
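The per-line idea above could be sketched roughly like this. All names, and the XRGB8888-to-16-bit expansion, are illustrative only, not the actual VKMS or Pixman code:

```c
#include <stddef.h>
#include <stdint.h>

/* Intermediate wide pixel, 16 bits per channel as in the patch. */
struct wide_px {
	uint16_t a, r, g, b;
};

/*
 * Read-conversion: expand one row of XRGB8888 into the wide format.
 * No stride handling here; the caller hands in a whole line.
 */
static void read_xrgb8888_line(const uint32_t *src, struct wide_px *dst,
			       size_t len)
{
	for (size_t i = 0; i < len; i++) {
		uint32_t p = src[i];

		dst[i].a = 0xffff;			/* X channel: opaque */
		dst[i].r = ((p >> 16) & 0xff) * 0x101;	/* 8 -> 16 bits */
		dst[i].g = ((p >> 8) & 0xff) * 0x101;
		dst[i].b = (p & 0xff) * 0x101;
	}
}

/* Blending: src over dst for a whole line, result written back into dst. */
static void blend_line(const struct wide_px *src, struct wide_px *dst,
		       size_t len)
{
	for (size_t i = 0; i < len; i++) {
		uint32_t a = src[i].a, na = 0xffff - a;

		dst[i].r = (src[i].r * a + dst[i].r * na) / 0xffff;
		dst[i].g = (src[i].g * a + dst[i].g * na) / 0xffff;
		dst[i].b = (src[i].b * a + dst[i].b * na) / 0xffff;
		dst[i].a = 0xffff;
	}
}
```

A write-conversion helper would then take one line of wide pixels and pack it into the destination format, so none of the three stages ever sees a stride or a per-pixel coordinate pair.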
> >>
> >> I was doing some rudimentary profiling, and I noticed that the code
> >> spends most of the time with the indirect system call overhead and not
> >> the actual computation. This can definitively help with it.
> >
> > Hi,
> >
> > I suppose you mean function (pointer) call, not system call?
>
> Yes, I misspelled it.
>
> >
> > Really good that you have already profiled things. All optimisations
> > should be guided by profiling, otherwise they are just guesses (and I
> > got lucky this time I guess).
> >
> >>> Pixman may not do exactly this, but it does something very similar.
> >>> Pixman also has multiple different levels of optimisations, which may
> >>> not be necessary for VKMS.
> >>>
> >>> It's a trade-off between speed and temporary memory consumed. You need
> >>> temporary buffers for two lines, but not more (not a whole image in
> >>> full intermediate precision). Further optimisation could determine
> >>> whether to process whole image rows as lines, or split a row into
> >>> multiple lines to stay within CPU cache size.
> >>>
> >>
> >> Sorry, I didn't understand the idea of the last sentence.
> >
> > If an image is very wide, a single row could still be relatively large
> > in size (bytes). If it is large enough that the working set falls out
> > of a faster CPU cache into a slower CPU cache (or worse yet, into RAM
> > accesses), performance may suffer and become memory bandwidth limited
> > rather than ALU rate limited. Theoretically that can be worked around
> > by limiting the maximum size of a line, and splitting an image row into
> > multiple lines.
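As a hypothetical sketch of that row-splitting (LINE_MAX_PIXELS is an invented tuning knob that would need profiling per CPU, not a VKMS constant):

```c
#include <stddef.h>

#define LINE_MAX_PIXELS 1024	/* invented; would need profiling to tune */

/* Example chunk op: just tally pixels, standing in for the real pipeline. */
static void tally_chunk(size_t x, size_t len, void *ctx)
{
	(void)x;
	*(size_t *)ctx += len;
}

/*
 * Split one framebuffer row into chunks of at most LINE_MAX_PIXELS pixels,
 * so the temporary line buffers stay within a fast CPU cache. The callback
 * would run the read/blend/write pipeline on one chunk at offset x.
 * Returns the number of chunks processed.
 */
static size_t for_each_chunk(size_t row_width,
			     void (*op)(size_t x, size_t len, void *ctx),
			     void *ctx)
{
	size_t chunks = 0;

	for (size_t x = 0; x < row_width; x += LINE_MAX_PIXELS) {
		size_t len = row_width - x;

		if (len > LINE_MAX_PIXELS)
			len = LINE_MAX_PIXELS;
		op(x, len, ctx);
		chunks++;
	}
	return chunks;
}
```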
>
> Got it now, thanks.
>
> >
> > However, this is an optimisation one probably should not do until there
> > is performance profiling data showing that it actually is useful. The
> > optimal line size limit depends on the CPU model as well. So it's a bit
> > far out, but something you could keep in mind just in case.
>
> OK. For now I will not deal with this complexity, and if necessary I
> will revisit it later.
>
> >
> >>> Since it seems you are blending multiple planes into a single
> >>> destination which is assumed to be opaque, you might also be able to do
> >>> the multiple blends without convert-writing and read-converting to/from
> >>> the destination between every plane. I think that might be possible to
> >>> architect on top of the per-line operations still.
> >>
> >> I tried it. But I don't know how to do this without it looking like a mess.
>
> I don't think it changes anything, but I forgot to mention that I tried
> it with per-pixel blending, not per-line.
Hi,
I believe that can get messy per-pixel, yeah. You kind of want to keep
the per-pixel (inner loop) operations as fast as possible, and that
naturally tends to lead to contorted code as you try to optimise it
(prematurely perhaps).
I would expect per-line code to be cleaner, because there is less state
to be handled in the inner loop, and the outer loops spin rarely enough
that you can afford to write more clear code.
> > Dedicate one of the temporary line buffers for the destination, and
> > instead of writing it out and loading it back for each input plane,
> > leave it in place over all planes and write it out just once at the end.
> >
> > I do expect more state tracking needed. You need to iterate over the
> > list of planes for each output row, extract only the used part of an
> > input plane's buffer into the other temporary line buffer, and offset
> > the destination line buffer and length to match when passing those into
> > a blending function.
>
> It's exactly this state tracking that was a mess when I was trying to
> implement something similar. But this is one more thing that probably
> I will have to revisit later.
Mind the difference between state tracking in the inner loop vs. the
outer loops. Per-line, inner loop becomes simpler while the outer loops
become slightly more complicated, but it should be a net win, because
the outer loops spin fewer times.
When nesting gets too deep, code becomes a mess, or the number of local
variables in a function grows too far, it usually helps to split things
into more functions. The compiler will inline them for you
automatically when that is beneficial.
Function pointer calls cannot usually be inlined, hence they should be
kept out of the innermost loop.
> > It's not an obvious win but a trade-off, so profiling is again needed.
> >
> > Btw. the use of temporary line buffers should also help with
> > implementing scaling. You could do the filtering during reading of the
> > input buffer. If the filter is not nearest, meaning you need to access
> > more than one input pixel per pixel-for-blending, there are a few ways
> > to go about that. You could do the filtering during the input buffer
> > reading, or you could load two input buffer rows into temporary line
> > buffers and do filtering as a separate step into yet another line
> > buffer. As the composition advances from top to bottom, only one of the
> > input buffer rows will change at a time (during up-scaling) so you can
> > avoid re-loading a row by swapping the line buffers.
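Purely as an illustration of that buffer swapping (invented names, a 2-tap vertical filter assumed, and a load counter standing in for the actual row reads):

```c
/*
 * For each output row of a 2-tap vertical filter, source rows r0 and
 * r0 + 1 are needed. Track which source row each of the two line buffers
 * holds; as composition advances top to bottom, at most one row changes
 * per step during up-scaling, so swap the buffer roles instead of
 * reloading both. Returns how many row loads were actually performed.
 */
static int upscale_row_loads(int out_rows, int src_num, int src_den)
{
	int held[2] = { -1, -1 };	/* src row held by each line buffer */
	int loads = 0;

	for (int y = 0; y < out_rows; y++) {
		int r0 = y * src_num / src_den;	/* top filter tap */

		if (held[0] != r0) {
			if (held[1] == r0) {
				held[1] = held[0];	/* swap buffer roles */
				held[0] = r0;
			} else {
				held[0] = r0;
				loads++;
			}
		}
		if (held[1] != r0 + 1) {
			held[1] = r0 + 1;
			loads++;
		}
	}
	return loads;
}
```

For a 2x up-scale over four output rows this performs 3 loads where the naive approach would do 8.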
> >
> > This is getting ahead of things, so don't worry about scaling or
> > filtering yet. The first thing is to see if you can make the line
> > buffer approach give you a significant speed-up.
> >
> >> Does Pixman perform something similar?
> >
> > No, Pixman composition operation has only three images: source,
> > mask, and destination. All those it can handle simultaneously, but
> > since there is no "multi-blending" API, it doesn't need to take care of
> > more.
> >
> > IIRC, Pixman also has a form of optimised operations that do blending
> > and converting to destination in the same pass. The advantage of that
> > is that blending can work with less precision when it knows what
> > precision the output will be converted to and it saves some bandwidth
> > by not needing to write-after-blending and read-for-conversion
> > intermediate precision values. The disadvantage is that the number of
> > needed specialised blending functions explodes by the number of
> > possible destination formats. Pixman indeed makes those specialised
> > functions optional, falling back to more generic C code. I would hope
> > that VKMS does not need to go this far in optimisations though.
>
> This should be plenty fast indeed. Maybe worth it for formats that are
> extremely common.
My feeling says that it would be better to not aim that far at first,
and see how the per-line approach works with separate pluggable
read-conversion, blending, and write-conversion functions.
Besides, combined blend-write operations would probably conflict with
the idea of blending all planes first and then writing destination once.
There are KMS properties for different blending functions, so that's a
reason to make the blending function pluggable as well. Not just for a
potential optimisation to skip the X channel on input completely.
If you want to support background color property, that might be handled
with a special read-conversion function which just fills the temporary
line buffer with the background color.
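That could be as simple as the following, again with made-up names and the wide 16-bit-per-channel pixel; a real plane's read function would take a framebuffer source instead of a constant:

```c
#include <stddef.h>
#include <stdint.h>

struct wide_px { uint16_t a, r, g, b; };

/*
 * "Read-conversion" for a background color property: instead of fetching
 * from a framebuffer, fill the temporary line buffer with one constant
 * ARGB16161616 color. It then occupies the same per-line pipeline slot
 * as a real plane's read function.
 */
static void read_background_line(struct wide_px bg, struct wide_px *dst,
				 size_t len)
{
	for (size_t i = 0; i < len; i++)
		dst[i] = bg;
}
```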
It is always possible to get faster by writing more specialised
functions that can take advantage of more invariants/assumptions, but
you get a combinatorial explosion of specialisations.
Thanks,
pq