Message-ID: <c15805b0-261b-114a-c29d-b63f00dd8da4@synaptics.com>
Date: Wed, 26 Jul 2023 10:49:29 +0800
From: Hsia-Jun Li <Randy.Li@...aptics.com>
To: Paul Kocialkowski <paul.kocialkowski@...tlin.com>
Cc: linux-kernel@...r.kernel.org,
Nicolas Dufresne <nicolas.dufresne@...labora.com>,
linux-media@...r.kernel.org, Hans Verkuil <hverkuil@...all.nl>,
Sakari Ailus <sakari.ailus@....fi>,
Andrzej Pietrasiewicz <andrzej.p@...labora.com>,
Michael Tretter <m.tretter@...gutronix.de>,
Jernej Škrabec <jernej.skrabec@...il.com>,
Chen-Yu Tsai <wens@...e.org>,
Samuel Holland <samuel@...lland.org>,
Thomas Petazzoni <thomas.petazzoni@...tlin.com>
Subject: Re: Stateless Encoding uAPI Discussion and Proposal
On 7/25/23 20:15, Paul Kocialkowski wrote:
> Hey,
>
> Long time, good to see you are still around and interested in these topics 😄
>
> On Tue 25 Jul 23, 11:33, Hsia-Jun Li wrote:
>> On 7/12/23 22:07, Paul Kocialkowski wrote:
>>> Hi Nicolas,
>>>
>>> Thanks for the quick reply!
>>>
>>> On Tue 11 Jul 23, 14:18, Nicolas Dufresne wrote:
>>>> On Tuesday, July 11, 2023 at 19:12 +0200, Paul Kocialkowski wrote:
>>>>> Hi everyone!
>>>>>
>>>>> After various discussions following Andrzej's talk at EOSS, feedback from the
>>>>> Media Summit (which I could not attend unfortunately) and various direct
>>>>> discussions, I have compiled some thoughts and ideas about stateless encoders
>>>>> support with various proposals. This is the result of a few years of interest
>>>>> in the topic, after working on a PoC for the Hantro H1 using the hantro driver,
>>>>> which turned out to have numerous design issues.
>>>>>
>>>>> I am now working on a H.264 encoder driver for Allwinner platforms (currently
>>>>> focusing on the V3/V3s), which already provides some usable bitstream and will
>>>>> be published soon.
>>>>>
>>>>> This is a very long email where I've tried to split things into distinct topics
>>>>> and explain a few concepts to make sure everyone is on the same page.
>>>>>
>>>>> # Bitstream Headers
>>>>>
>>>>> Stateless encoders typically do not generate all the bitstream headers and
>>>>> sometimes no header at all (e.g. Allwinner encoder does not even produce slice
>>>>> headers). There's often some hardware block that makes bit-level writing to the
>>>>> destination buffer easier (deals with alignment, etc).
>>>>>
>>>>> The values of the bitstream headers must be in line with how the compressed
>>>>> data bitstream is generated and generally follow the codec specification.
>>>>> Some encoders might allow configuring all the fields found in the headers,
>>>>> others may only allow configuring a few or have specific constraints regarding
>>>>> which values are allowed.
>>>>>
>>>>> As a result, we cannot expect that any given encoder is able to produce frames
>>>>> for any set of headers. Reporting related constraints and limitations (beyond
>>>>> profile/level) seems quite difficult and error-prone.
>>>>>
>>>>> So it seems that keeping header generation in-kernel only (close to where the
>>>>> hardware is actually configured) is the safest approach.
>>>> This seems to match with what happened with the Hantro VP8 proof of concept. The
>>>> encoder does not produce the frame header, but also, it produces 2 encoded
>>>> buffers which cannot be made contiguous at the hardware level. This notion of
>>>> plane in coded data wasn't something that blended well with the rest of the API
>>>> and we didn't want to copy in the kernel while the userspace would also be
>>>> forced to copy to align the headers. Our conclusion was that it was best to
>>>> generate the headers and copy both segments before delivering to userspace. I
>>>> suspect this type of situation will be quite common.
>>> Makes sense! I guess the same will need to be done for Hantro H1 H.264 encoding
>>> (in my PoC the software-generated headers were crafted in userspace and didn't
>>> have to be part of the same buffer as the coded data).
>> We just need a method to indicate where the hardware should write its
>> slice data or compressed frame.
>> While we decide which frames the current frame should reference, (some)
>> hardware may discard our decision in favour of whichever reference picture
>> set uses fewer bits. Unless the codec supports a fill-up method, this could
>> lead to a gap between the header and the frame data.
> I think I would need a bit more context to understand this case, especially
> what the hardware could decide to discard.
>
I know the Hantro can't do this, but such a design is not unusual. The
hardware could tell us that no CU ended up inter-predicting from one of the
previous reconstruction frames, in which case it is not necessary to keep
that frame in its RPS.
> My understanding is that the VP8 encoder needs to write part of the header
> separately from the coded data and uses distinct address registers for the two.
I don't think Hantro H1 would do that.
> So the approach is to merge the hw-generated headers and coded data before
> returning to userspace.
>
>>>>> # Codec Features
>>>>>
>>>>> Codecs have many variable features that can be enabled or not and specific
>>>>> configuration fields that can take various values. There is usually some
>>>>> top-level indication of profile/level that restricts what can be used.
>>>>>
>>>>> This is a very similar situation to stateful encoding, where codec-specific
>>>>> controls are used to report and set profile/level and configure these aspects.
>>>>> A particularly nice thing about it is that we can reuse these existing controls
>>>>> and add new ones in the future for features that are not yet covered.
>>>>>
>>>>> This approach feels more flexible than designing new structures with a selected
>>>>> set of parameters (that could match the existing controls) for each codec.
>>>> Though, reading more into this email, we still have a fair amount of controls
>>>> to design and add, probably some compound controls too?
>>> Yeah definitely. My point here is merely that we should reuse existing controls
>>> for general codec features, but I don't think we'll get around introducing new
>>> ones for stateless-specific parts.
>>>
>> Things like profile, level or tier could be reused. It makes no sense to
>> expose those vendor-specific features.
>> Besides, profile, level and tier are usually stored in the sequence header
>> or uncompressed header; the hardware doesn't care about them.
>>
>> I think we should go with the vendor register buffer approach that I have
>> always suggested. There are many encoding tools that a codec offers, and
>> hardware variants may not support or use them all. The context switching
>> between userspace and the kernel for so many controls would drive you mad.
> I am strongly against this approach, instead I think we need to keep all
> vendor-specific parts in the kernel driver and provide a clean unified userspace
> API.
>
We are driving away vendor participation. Besides, the current design is
a performance bottleneck.
> Also I think V4L2 has a way to set multiple controls at once, so the
> userspace/kernel context switching is rather minimal and within reasonable
> expectations. Of course it will never be as efficient as userspace mapping the
> hardware registers in virtual memory but there are so many problems with this
> approach that it's really not worth it.
>
I am not talking about mapping the registers into userspace.
Userspace would generate a register set for the current frame, while the
kernel would fill that register set with the buffer addresses and trigger
the hardware to apply it.
Generating a register set from controls, or even filling in a partial
slice header, costs many resources.
And what we try to define may not fit real hardware designs: we can only
cover what most hardware would require, but vendors don't have to follow
that. Besides, a codec spec can be updated even after it has been released
for a while.
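
For scale, the batching Paul refers to looks roughly like this today (a
sketch only; video_fd is assumed to be an already-open encoder device, and
the two IDs are existing stateful controls):

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Both controls travel in a single VIDIOC_S_EXT_CTRLS call, i.e. one
 * user/kernel transition rather than one per control. */
static int set_gop_controls(int video_fd)
{
        struct v4l2_ext_control ctrl[2] = {
                { .id = V4L2_CID_MPEG_VIDEO_GOP_SIZE, .value = 30 },
                { .id = V4L2_CID_MPEG_VIDEO_H264_I_PERIOD, .value = 30 },
        };
        struct v4l2_ext_controls ctrls = {
                .which = V4L2_CTRL_WHICH_CUR_VAL,
                .count = 2,
                .controls = ctrl,
        };

        return ioctl(video_fd, VIDIOC_S_EXT_CTRLS, &ctrls);
}

Even batched like this, every new encoding tool still means a new control
to standardize, which is my point above.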
>>>>> # Reference and Reconstruction Management
>>>>>
>>>>> With stateless encoding, we need to tell the hardware which frames need to be
>>>>> used as references for encoding the current frame and make sure we have
>>>>> these references available as decoded frames in memory.
>>>>>
>>>>> Regardless of references, stateless encoders typically need some memory space to
>>>>> write the decoded (known as reconstructed) frame while it's being encoded.
>>>>>
>>>>> One question here is how many slots for decoded pictures should be allocated
>>>>> by the driver when starting to stream. There is usually a maximum number of
>>>>> reference frames that can be used at a time, although perhaps there is a use
>>>>> case for keeping more around and alternating between them for future references.
>>>>>
>>>>> Another question is how the driver should keep track of which frame will be used
>>>>> as a reference in the future and which one can be evicted from the pool of
>>>>> decoded pictures if it's not going to be used anymore.
>>>>>
>>>>> A restrictive approach would be to let the driver alone manage that, similarly
>>>>> to how stateful encoders behave. However it might provide extra flexibility
>>>>> (and memory gain) to allow userspace to configure the maximum number of possible
>>>>> reference frames. In that case it becomes necessary to indicate if a given
>>>>> frame will be used as a reference in the future (maybe using a buffer flag)
>>>>> and to indicate which previous reference frames (probably to be identified with
>>>>> the matching output buffer's timestamp) should be used for the current encode.
>>>>> This could be done with a new dedicated control (as a variable-sized array of
>>>>> timestamps). Note that userspace would have to update it for every frame or the
>>>>> reference frames will remain the same for future encodes.
>>>>>
>>>>> The driver will then make sure to keep the reconstructed buffer around, in one
>>>>> of the slots. When there's no slot left, the driver will drop the oldest
>>>>> reference it has (maybe with a bounce buffer to still allow it to be used as a
>>>>> reference for the current encode).
>>>>>
>>>>> With this behavior defined in the uAPI spec, userspace will also be able to
>>>>> keep track of which previous frame is no longer allowed as a reference.
>>>> If we want, we could mirror the stateless decoders here. During decoding, we
>>>> pass a "dpb" or a reference list, which represents all the active references.
>>>> These do not have to be used by the current frame, but the driver is allowed to
>>>> use this list to clean up and free unused memory (or reuse it in case it has a
>>>> fixed slot model, like mtk vcodec).
>>>>
>>>> On top of this, we add a list of references to be used for producing the current
>>>> frame. Usually, the picture references are indices into the dpb/reference list
>>>> of timestamps. This makes validation easier. We'll have to define how many
>>>> references can be used, I think, since unlike decoders, encoders don't have to
>>>> fully implement levels and profiles.
>>> So that would be a very explicit description instead of expecting drivers to
>>> do the maintenance and userspace to figure out which frame was evicted from
>>> the list. So yeah this feels more robust!
>>>
>>> Regarding the number of reference frames, I think we need to specify both
>>> how many references can be used at a time (number of hardware slots) and how
>>> many total references can be in the reference list (number of rec buffers to
>>> keep around).
>>>
>>> We could also decide that making the current frame part of the global reference
>>> list is a way to indicate that its reconstruction buffer must be kept around,
>>> or we could have a separate way to indicate that. I lean towards the former
>>> since it would put all reference-related things in one place and avoid coming
>>> up with a new buffer flag or such.
>>>
>>> Also we would probably still need to do some validation driver-side to make
>>> sure that userspace doesn't put references in the list that were not marked
>>> as such when encoded (and for which the reconstruction buffer may have been
>>> recycled already).
>>>
>> The DPB is the only thing we need to decide any API for here under the
>> vendor register buffer approach. We need the driver to translate the buffer
>> references into addresses that the hardware can use, in the right registers.
>>
>> The major problem is how to export the reconstruction buffer, which has
>> been hidden for many years.
>> This could be discussed in another thread, like the V4L2 ext buffer API one.
> Following my previous point, I am also strongly against exposing the
> reconstruction buffer to userspace.
>
Android hates people allocating a huge amount of memory without
userspace's (Android's core system's) awareness.
Whether a reconstruction frame will be used as a long-term reference (or
golden frame) is completely up to userspace. For example, when we encode a
frame in SVC layer 1, we may not reference a frame in layer 0; should we
let the hardware discard it? Later, we may decide to reference it again.
Besides, I don't like the timestamp way of referring to a buffer here: one
input graphics buffer could produce multiple reconstruction buffers (with
different coding options), which is common in the SVC case.
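
To make that concrete: stateless decoders name a reference with the output
buffer's timestamp converted to a 64-bit nanosecond cookie, and an encoder
reference-list control would presumably reuse the same convention. A rough
sketch (the struct below is invented for illustration; nothing like it
exists today):

#include <linux/videodev2.h>

/* Hypothetical payload for a variable-sized reference-list control,
 * invented here for illustration only. */
struct v4l2_ctrl_enc_ref_list {
        __u64 reference_ts[16];
        __u8 num_refs;
};

/* Same cookie convention as the stateless decoders: the timestamp of
 * the queued output buffer, converted to nanoseconds. */
static __u64 buf_ts_to_ns(const struct v4l2_buffer *buf)
{
        return buf->timestamp.tv_sec * 1000000000ULL +
               buf->timestamp.tv_usec * 1000ULL;
}

The cookie names the input buffer, so two reconstructions produced from the
same input with different coding options could not be told apart.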
>>>>> # Frame Types
>>>>>
>>>>> Stateless encoder drivers will typically instruct the hardware to encode either
>>>>> an intra-coded or an inter-coded frame. While a stream composed only of a single
>>>>> intra-coded frame followed by only inter-coded frames is possible, it's
>>>>> generally not desirable as it is not very robust against data loss and makes
>>>>> seeking difficult.
>>>> Let's avoid this generalization in our document and design. In RTP streaming,
>>>> like WebRTC or SIP, it is desirable to use an open GOP (with nothing other
>>>> than P frames all the time, except the very first one). The FORCE_KEY_FRAME is
>>>> meant to allow handling RTP PLI (and other similar feedback). It's quite rare
>>>> that an application would mix closed GOP and FORCE_KEY_FRAME, but it is
>>>> allowed. What I've seen the most is that FORCE_KEY_FRAME would just start a
>>>> new GOP, following size and period from this new point.
>>> Okay fair enough, thanks for the details!
>>>
>>>>> As a result, the frame type is usually decided based on a given GOP size
>>>>> (the frequency at which a new intra-coded frame is produced) while intra-coded
>>>>> frames can also be explicitly requested on demand. Stateful encoders implement
>>>>> these through dedicated controls:
>>>>> - V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME
>>>>> - V4L2_CID_MPEG_VIDEO_GOP_SIZE
>>>>> - V4L2_CID_MPEG_VIDEO_H264_I_PERIOD
>>>>>
>>>>> It seems that reusing them would be possible, which would let the driver decide
>>>>> of the particular frame type.
>>>>>
>>>>> However it makes the reference frame management a bit trickier since reference
>>>>> frames might be requested from userspace for a frame that ends up being
>>>>> intra-coded. We can either allow this and silently ignore the info or expect
>>>>> that userspace keeps track of the GOP index and not send references on the first
>>>>> frame.
>>>>>
>>>>> In some codecs, there's also a notion of barrier key-frames (IDR frames in
>>>>> H.264) that strictly forbid using any past reference beyond the frame.
>>>>> There seems to be an assumption that the GOP start uses this kind of frame
>>>>> (and not any intra-coded frame), while the force key frame control does not
>>>>> particularly specify it.
>>>>>
>>>>> In that case we should flush the list of references and userspace should no
>>>>> longer provide references to them for future frames. This puts a requirement on
>>>>> userspace to keep track of GOP start in order to know when to flush its
>>>>> reference list. It could also check if V4L2_BUF_FLAG_KEYFRAME is set, but this
>>>>> could also indicate a general intra-coded frame that is not a barrier.
>>>>>
>>>>> So another possibility would be for userspace to explicitly indicate which
>>>>> frame type to use (in a codec-specific way) and act accordingly, leaving any
>>>>> notion of GOP up to userspace. I feel like this might be the easiest approach
>>>>> while giving an extra degree of control to userspace.
>>>> I also lean toward this approach ...
>>>>
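
For reference, forcing a sync point on the stateful API today is a single
button control (sketch; video_fd is an open encoder device); an explicit
stateless frame-type control would presumably be set per-request instead:

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Ask the encoder to make the next frame a key frame. */
static int force_key_frame(int video_fd)
{
        struct v4l2_control ctrl = {
                .id = V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME,
                .value = 1,
        };

        return ioctl(video_fd, VIDIOC_S_CTRL, &ctrl);
}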
>>>>> # Rate Control
>>>>>
>>>>> Another important feature of encoders is the ability to control the amount of
>>>>> data produced following different rate control strategies. Stateful encoders
>>>>> typically do this in-firmware and expose controls for selecting the strategy
>>>>> and associated targets.
>>>>>
>>>>> It seems desirable to support both automatic and manual rate-control to
>>>>> userspace.
>>>>>
>>>>> Automatic control would be implemented kernel-side (with algos possibly shared
>>>>> across drivers) and reuse existing stateful controls. The advantage is
>>>>> simplicity (userspace does not need to carry its own rate-control
>>>>> implementation) and to ensure that there is a built-in mechanism for common
>>>>> strategies available for every driver (no mandatory dependency on a proprietary
>>>>> userspace stack). There may also be extra statistics or controls available to
>>>>> the driver that allow finer-grain control.
>>>> Though not controlling the GOP (or having no GOP) might require a bit more
>>>> work on the driver side. Today, we do have queues of requests, queues of
>>>> buffers, etc. But it is still quite difficult to do lookahead across these
>>>> queues. That is only useful if the rate control algorithm can use future
>>>> frame types (like keyframe) to make decisions. That could be me pushing too
>>>> far here though.
>>> Yes I agree the interaction between userspace GOP control and kernel-side
>>> rate-control might be quite tricky without any indication of what the next frame
>>> types will be.
>>>
>>> Maybe we could only allow explicit frame type configuration when using manual
>>> rate-control and have kernel-side GOP management when in-kernel rc is used
>>> (and we can allow it with manual rate-control too). I like having this option
>>> because it allows for simple userspace implementations.
>>>
>>> Note that this could perhaps also be added as an optional feature
>>> for stateful encoders since some of them seem to be able to instruct the
>>> firmware what frame type to use (in addition to directly controlling QP).
>>> There's also a good chance that this feature is not available when using
>>> a firmware-backed rc algorithm.
>>>
>>>>> Manual control allows userspace to get creative and requires the ability to set
>>>>> the quantization parameter (QP) directly for each frame (controls already
>>>>> exist, as many stateful encoders also support this).
>>>>>
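
As a sketch of the manual path, assuming the existing per-frame-type QP
controls become settable in each frame's media request (request_fd would
come from MEDIA_IOC_REQUEST_ALLOC; whether these exact controls get reused
is part of the open design):

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Per-frame QP chosen by a userspace rate-control loop, attached to
 * this frame's media request. */
static int set_frame_qp(int video_fd, int request_fd, int qp)
{
        struct v4l2_ext_control ctrl = {
                .id = V4L2_CID_MPEG_VIDEO_H264_I_FRAME_QP,
                .value = qp,
        };
        struct v4l2_ext_controls ctrls = {
                .which = V4L2_CTRL_WHICH_REQUEST_VAL,
                .request_fd = request_fd,
                .count = 1,
                .controls = &ctrl,
        };

        return ioctl(video_fd, VIDIOC_S_EXT_CTRLS, &ctrls);
}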
>>>>> # Regions of Interest
>>>>>
>>>>> Regions of interest (ROIs) allow specifying sub-regions of the frame that should
>>>>> be prioritized for quality. Stateless encoders typically support a limited
>>>>> number and allow setting specific QP values for these regions.
>>>>>
>>>>> While the QP value should be used directly in manual rate-control, we probably
>>>>> want to have some "level of importance" setting for kernel-side rate-control,
>>>>> along with the dimensions/position of each ROI. This could be expressed with
>>>>> a new structure containing all these elements and presented as a variable-sized
>>>>> array control with as many elements as the hardware can support.
>>>> Do you see any difference in ROI for stateful and stateless? This looks like a
>>>> feature we could combine. Also, ROI exists for cameras too, I'd probably try and
>>>> keep them separate though.
>>> I feel like the stateful/stateless behavior should be the same, so that could be
>>> a shared control too. Also we could use a QP delta which would apply to both
>>> manual and in-kernel rate-control, but maybe that's too low-level in the latter
>>> case (it's not very obvious what a relevant delta could be when userspace has no idea
>>> of the current frame-wide QP value).
>>>
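
A strawman for that per-region structure, invented here purely for
illustration (one element per hardware ROI slot, carried as a
variable-sized array control):

#include <linux/videodev2.h>

/* Hypothetical ROI element, not an existing uAPI structure. */
struct v4l2_enc_roi {
        struct v4l2_rect rect;  /* position and dimensions */
        __s8 qp_delta;          /* delta for manual rate control */
        __u8 priority;          /* importance hint for in-kernel RC */
};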
>>>> This is a very good overview of the hard work ahead of us. Looking forward on
>>>> this journey and your Allwinner driver.
>>> Thanks a lot for your input!
>>>
>>> Honestly I was expecting that it would be more difficult than decoding, but it
>>> turns out it might not be the case.
>>>
>> Such rate control or quality reporting would be completely vendor-specific.
>>
>> We just need a method that lets the driver report those encoding statistics
>> to userspace.
> Returning the encoded bitstream size is perfectly generic and available to
> every encoder. Maybe we could also return some average QP value since that
> seems quite common. Other than that the rest should be kept in-kernel so we
> can have a generic API.
>
You are just throwing away the tools that the hardware could offer.
> Also it seems that the Hantro H1 specific mechanism (checkpoint-based) is not
> necessarily a lot better than regular frame-wide QP settings.
>
Macroblock-level QP control in the Hantro H1 is very useful. For FOSS, those
vendor-specific statistics or controls may not be necessary, but real
products are not that simple.
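
For completeness, the one statistic that is already generic, the encoded
size, comes back with the capture buffer (rough sketch):

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* The bitstream size is reported in bytesused on the dequeued
 * capture buffer. */
static void print_frame_size(int video_fd)
{
        struct v4l2_plane plane = { 0 };
        struct v4l2_buffer buf = {
                .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
                .memory = V4L2_MEMORY_MMAP,
                .length = 1,
                .m.planes = &plane,
        };

        if (ioctl(video_fd, VIDIOC_DQBUF, &buf) == 0)
                printf("frame size: %u bytes\n",
                       buf.m.planes[0].bytesused);
}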
> Cheers,
>
> Paul
>
> --
> Paul Kocialkowski, Bootlin
> Embedded Linux and kernel engineering
> https://bootlin.com
>
--
Hsia-Jun(Randy) Li