Message-ID: <a2e8e01ea754232dd3562b34702b6600d7358605.camel@collabora.com>
Date: Thu, 10 Aug 2023 10:34:31 -0400
From: Nicolas Dufresne <nicolas.dufresne@...labora.com>
To: Paul Kocialkowski <paul.kocialkowski@...tlin.com>,
linux-kernel@...r.kernel.org, linux-media@...r.kernel.org,
Hans Verkuil <hverkuil@...all.nl>,
Sakari Ailus <sakari.ailus@....fi>,
Andrzej Pietrasiewicz <andrzej.p@...labora.com>,
Michael Tretter <m.tretter@...gutronix.de>
Cc: Jernej Škrabec <jernej.skrabec@...il.com>,
Chen-Yu Tsai <wens@...e.org>,
Samuel Holland <samuel@...lland.org>,
Thomas Petazzoni <thomas.petazzoni@...tlin.com>
Subject: Re: Stateless Encoding uAPI Discussion and Proposal
On Thursday, 10 August 2023 at 15:44 +0200, Paul Kocialkowski wrote:
> Hi folks,
>
> On Tue 11 Jul 23, 19:12, Paul Kocialkowski wrote:
> > I am now working on a H.264 encoder driver for Allwinner platforms (currently
> > focusing on the V3/V3s), which already provides some usable bitstream and will
> > be published soon.
>
> So I wanted to share an update on my side, since I've been making progress on
> the H.264 encoding work for Allwinner platforms. At this point the code supports
> IDR, I and P frames, with a single reference. It also supports GOP (both closed
> and open with IDR or I frame interval and explicit keyframe request) but uses
> QP controls and does not yet provide rate control. I hope to be able to
> implement rate-control before we can make a first public release of the code.
Just a reminder that we will review the API first; the supporting
implementation will just be a companion. So in this context, the sooner the
better for an RFC here.
>
> One of the main topics of concern now is how reference frames should be managed
> and how it should interact with kernel-side GOP management and rate control.
Maybe we need to have a discussion about kernel-side GOP management first?
While I think kernel-side rate control is unavoidable, I don't think stateless
encoders should have kernel-side GOP management.
>
> Leaving GOP management to the kernel-side implies having it decide which frame
> should be IDR, I or P (and B for encoders that can support it), while keeping
> the possibility to request a keyframe (IDR) and configure GOP size. Now it seems
> to me that this is already a good balance between giving userspace a decent
> level of control while not having to specify the frame type explicitly for each
> frame or maintain a GOP in userspace.
My expectation for a stateless encoder is that userspace specifies the frame
type and the associated references when the type requires them.
>
> Requesting the frame type explicitly seems more fragile as many situations will
> be invalid (e.g. requesting a P frame at the beginning of the stream, etc) and
> it generally requires userspace to know a lot about what the codec assumptions
> are. Also for B frames the decision would need to be consistent with the fact
> that a following frame (in display order) would need to be submitted earlier
> than the current frame and inform the kernel so that the picture order count
> (display order indication) can be maintained. This is not impossible or out of
> reach, but it brings a lot of complexity for little advantage.
We have had much more consistent results over the last decade with stateless
hardware codecs, in contrast to stateful ones where we end up with wide
variations in behaviour. This applies to Chromium, GStreamer and any active
users of VA encoders really. I'm strongly in favour of a stateless reference
API coming out of the Linux kernel.
>
> Leaving the decision to the kernel side with some hints (whether to force a
> keyframe, whether to allow B frames) seems a lot easier, especially for B frames
> since the kernel could just receive frames in-order and decide to hold one
> so that it can use the next frame submitted as a forward reference for this
> upcoming B frame. This requires flushing support but it's already well in place
> for stateful encoders.
No, it's a lot harder for users. Keyframe placement should be driven by
various kinds of image analysis and streaming conditions, like scene-change
detection and network traffic, and I strictly don't want to depend on the
Linux kernel when it's time to implement a custom reference tree. In general,
stateful encoders are never up to the game of modern RTP features and other
fancy robust referencing models. Overall I have to disagree with your proposed
approach. I believe we have to create a stateless encoder interface and not
completely abstract this hardware behind our existing stateful interface. We
should take advantage of the nature of the hardware to make simpler and safer
drivers.
>
> The next topic of interest is reference management. It seems pretty clear that
> the decision of whether a frame should be a reference or not always needs to be
> taken when encoding that frame. In H.264 the nal_ref_idc slice header element
> indicates whether a frame is marked as reference or not. IDR frames can
> additionally be marked as long-term reference (if I understood correctly, the
> frame will stay in the reference picture list until the next IDR frame).
This is incorrect. Any frame can be marked as a long-term reference, no matter
what type it is. From what I recall, long-term marking in the bitstream uses
an explicit index, so there is no specific rule on which one gets evicted.
Long-term references are of course limited, as they occupy space in the DPB.
Also, each codec has different DPB semantics. For H.264, the DPB can run in
two modes. The first is a simple FIFO (the sliding window): any frame you
encode and want to keep as a reference is pushed into the DPB (which has a
fixed size, minus the long-term entries). If it is full, the oldest frame is
removed. This is not bound to IDR or GOP boundaries, though an IDR will
implicitly cause the decoder to evict everything (including long-term
references).
The second mode uses the memory management control operations (MMCO). These
are a series of instructions that the encoder can send to the decoder. The
specification is quite complex; it is a common source of bugs in decoders and
an area where stateless hardware codecs generally perform more consistently.
Through these commands, the encoder ensures that the decoder's DPB
representation stays in sync.
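To make the sliding-window mode concrete, here is a minimal sketch of the
eviction rule described above (the structure and function names are made up
for illustration and are not taken from any driver or spec text):

#include <stdbool.h>

#define DPB_SIZE 16

struct dpb_entry {
        bool used;
        bool long_term;
        unsigned int age;       /* higher means older */
};

static struct dpb_entry dpb[DPB_SIZE];

/* Push a newly reconstructed short-term reference, evicting the oldest
 * short-term entry when no slot is free; long-term entries are pinned. */
static int dpb_push_short_term(void)
{
        int victim = -1;
        int i;

        for (i = 0; i < DPB_SIZE; i++) {
                if (!dpb[i].used) {
                        victim = i;
                        break;
                }
                if (!dpb[i].long_term &&
                    (victim < 0 || dpb[i].age > dpb[victim].age))
                        victim = i;
        }
        if (victim < 0)
                return -1;      /* DPB entirely filled with long-term refs */

        for (i = 0; i < DPB_SIZE; i++)
                if (dpb[i].used)
                        dpb[i].age++;

        dpb[victim] = (struct dpb_entry){ .used = true };
        return victim;
}

/* An IDR implicitly flushes everything, long-term references included. */
static void dpb_flush_idr(void)
{
        int i;

        for (i = 0; i < DPB_SIZE; i++)
                dpb[i] = (struct dpb_entry){ 0 };
}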
> Frames that are marked as reference are added to the l0/l1 lists implicitly
> that way and are evicted mostly depending on the number of reference slots
> available, or when a new GOP is started.
Be aware that "slots" is a hardware implementation detail. I think the term
can be used for any MPEG codec, but be careful: "slots" in the AV1
specification have a completely different meaning, so generalizing the term
will create confusion.
>
> With the frame type decided by the kernel, it becomes nearly impossible for
> userspace to keep track of the reference lists. Userspace would at least need
> to know when an IDR frame is produced to flush the reference lists. In addition
> it looks like most hardware doesn't have a way to explicitly discard previous
> frames that were marked as reference from being used as reference for next
> frames. All in all this means that we should expect little control over the
> reference frames list.
>
> As a result my updated proposal would be to have userspace only indicate whether
> a submitted frame should be marked as a reference or not instead of submitting
> an explicit list of previous buffers that should be used as reference, which
> would be impossible to honor in many cases.
>
> Additional information gathered:
> - It seems likely that the Allwinner Video Engine only supports one reference
> frame. There's a register for specifying the rec buffer of a second one but
> I have never seen the proprietary blob use it. It might be as easy as
> specifying a non-zero address there but it might also be ignored or require
> some undocumented bit to use more than one reference. I haven't made any
> attempt at using it yet.
That reminds me of the Hantro H1. The Hantro H1 also has a second reference,
but no one ever uses it. It is on our todo list to actually give this a look.
> - Contrary to what I said after Andrzej's talk at EOSS, most Allwinner platforms
> do not support VP8 encode (despite Allwinner's proprietary blob having an
> API for it). The only platform that advertises it is the A80 and this might
> actually be a VP8-only Hantro H1. It seems that the API they developed in the
> library stuck around even if no other platform can use it.
Thanks for letting us know. Our assumption is that a second hardware design is
unlikely, as Google was giving the Hantro H1 away for free to any hardware
maker that wanted it.
>
> Sorry for the long email again, I'm trying to be a bit more explanatory than
> just giving some bare conclusions that I drew on my own.
>
> What do you think about these ideas?
In general, we diverge on the direction we want the interface to take. What
you seem to describe now is just a normal stateful encoder interface, with
everything needed to drive the stateless hardware implemented in the Linux
kernel. There is no parsing or other unsafe handling in encoders, so I don't
have a strict no-go argument against that, but to me it means much more
complex drivers and less flexibility. The VA model has been working great for
us in the past, giving us the ability to implement new features, or even
slightly off-spec features, while the Linux kernel might not be the right
place for such experimental methods.
Personally, I would rather discuss your uAPI RFC though; I think a lot of
other devs here would like to see what you have drafted.
Nicolas
>
> Cheers,
>
> Paul
>
> >
> > This is a very long email where I've tried to split things into distinct topics
> > and explain a few concepts to make sure everyone is on the same page.
> >
> > # Bitstream Headers
> >
> > Stateless encoders typically do not generate all the bitstream headers, and
> > sometimes generate none at all (e.g. the Allwinner encoder does not even
> > produce slice headers). There's often some hardware block that makes bit-level
> > writing to the destination buffer easier (deals with alignment, etc).
> >
> > The values of the bitstream headers must be in line with how the compressed
> > data bitstream is generated and generally follow the codec specification.
> > Some encoders might allow configuring all the fields found in the headers,
> > others may only allow configuring a few or have specific constraints regarding
> > which values are allowed.
> >
> > As a result, we cannot expect that any given encoder is able to produce frames
> > for any set of headers. Reporting related constraints and limitations (beyond
> > profile/level) seems quite difficult and error-prone.
> >
> > So it seems that keeping header generation in-kernel only (close to where the
> > hardware is actually configured) is the safest approach.
> >
> > # Codec Features
> >
> > Codecs have many variable features that can be enabled or not and specific
> > configuration fields that can take various values. There is usually some
> > top-level indication of profile/level that restricts what can be used.
> >
> > This is a very similar situation to stateful encoding, where codec-specific
> > controls are used to report and set profile/level and configure these aspects.
> > A particularly nice thing about it is that we can reuse these existing controls
> > and add new ones in the future for features that are not yet covered.
> >
> > This approach feels more flexible than designing new structures with a selected
> > set of parameters (that could match the existing controls) for each codec.
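As a quick illustration, reusing the existing codec controls for profile and
level could look like the following minimal sketch (fd is assumed to be an
already open encoder video device, and the values are arbitrary examples):

#include <sys/ioctl.h>
#include <linux/videodev2.h>

static void set_h264_profile_level(int fd)
{
        struct v4l2_ext_control ctrls[] = {
                {
                        .id = V4L2_CID_MPEG_VIDEO_H264_PROFILE,
                        .value = V4L2_MPEG_VIDEO_H264_PROFILE_MAIN,
                },
                {
                        .id = V4L2_CID_MPEG_VIDEO_H264_LEVEL,
                        .value = V4L2_MPEG_VIDEO_H264_LEVEL_4_0,
                },
        };
        struct v4l2_ext_controls arg = {
                .which = V4L2_CTRL_WHICH_CUR_VAL,
                .count = 2,
                .controls = ctrls,
        };

        ioctl(fd, VIDIOC_S_EXT_CTRLS, &arg);
}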
> >
> > # Reference and Reconstruction Management
> >
> > With stateless encoding, we need to tell the hardware which frames need to be
> > used as references for encoding the current frame and make sure we have these
> > references available as decoded frames in memory.
> >
> > Regardless of references, stateless encoders typically need some memory space to
> > write the decoded (known as reconstructed) frame while it's being encoded.
> >
> > One question here is how many slots for decoded pictures should be allocated
> > by the driver when starting to stream. There is usually a maximum number of
> > reference frames that can be used at a time, although perhaps there is a use
> > case for keeping more around and alternating between them for future references.
> >
> > Another question is how the driver should keep track of which frame will be used
> > as a reference in the future and which one can be evicted from the pool of
> > decoded pictures if it's not going to be used anymore.
> >
> > A restrictive approach would be to let the driver alone manage that, similarly
> > to how stateful encoders behave. However it might provide extra flexibility
> > (and memory gain) to allow userspace to configure the maximum number of possible
> > reference frames. In that case it becomes necessary to indicate if a given
> > frame will be used as a reference in the future (maybe using a buffer flag)
> > and to indicate which previous reference frames (probably to be identified with
> > the matching output buffer's timestamp) should be used for the current encode.
> > This could be done with a new dedicated control (as a variable-sized array of
> > timestamps). Note that userspace would have to update it for every frame or the
> > reference frames will remain the same for future encodes.
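As a rough sketch of what such a control could look like (the control name,
its offset from V4L2_CID_CODEC_STATELESS_BASE and the two-reference payload
are purely hypothetical; the timestamps would be those of the matching output
buffers converted to nanoseconds, as already done for stateless decoders):

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Hypothetical control ID, for illustration only. */
#define V4L2_CID_STATELESS_ENC_REF_TS   (V4L2_CID_CODEC_STATELESS_BASE + 400)

static void set_references(int video_fd, int req_fd,
                           __u64 prev_ts_ns, __u64 older_ts_ns)
{
        __u64 ref_ts[2] = { prev_ts_ns, older_ts_ns };
        struct v4l2_ext_control ctrl = {
                .id = V4L2_CID_STATELESS_ENC_REF_TS,
                .size = sizeof(ref_ts),
                .ptr = ref_ts,
        };
        struct v4l2_ext_controls ctrls = {
                .which = V4L2_CTRL_WHICH_REQUEST_VAL,
                .request_fd = req_fd,   /* media request for this frame */
                .count = 1,
                .controls = &ctrl,
        };

        ioctl(video_fd, VIDIOC_S_EXT_CTRLS, &ctrls);
}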
> >
> > The driver will then make sure to keep the reconstructed buffer around, in one
> > of the slots. When there's no slot left, the driver will drop the oldest
> > reference it has (maybe with a bounce buffer to still allow it to be used as a
> > reference for the current encode).
> >
> > With this behavior defined in the uAPI spec, userspace will also be able to
> > keep track of which previous frame is no longer allowed as a reference.
> >
> > # Frame Types
> >
> > Stateless encoder drivers will typically instruct the hardware to encode either
> > an intra-coded or an inter-coded frame. While a stream composed only of a single
> > intra-coded frame followed by only inter-coded frames is possible, it's
> > generally not desirable as it is not very robust against data loss and makes
> > seeking difficult.
> >
> > As a result, the frame type is usually decided based on a given GOP size
> > (the frequency at which a new intra-coded frame is produced), while intra-coded
> > frames can also be requested explicitly. Stateful encoders implement these
> > through dedicated controls:
> > - V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME
> > - V4L2_CID_MPEG_VIDEO_GOP_SIZE
> > - V4L2_CID_MPEG_VIDEO_H264_I_PERIOD
> >
> > It seems that reusing them would be possible, which would let the driver decide
> > on the particular frame type.
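For illustration, reusing those controls could look like the following minimal
sketch (fd is assumed to be the open encoder device; per-frame behaviour
through requests is left out):

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Configure a 30-frame GOP once. */
static void configure_gop(int fd)
{
        struct v4l2_ext_control gop = {
                .id = V4L2_CID_MPEG_VIDEO_GOP_SIZE,
                .value = 30,
        };
        struct v4l2_ext_controls arg = {
                .which = V4L2_CTRL_WHICH_CUR_VAL,
                .count = 1,
                .controls = &gop,
        };

        ioctl(fd, VIDIOC_S_EXT_CTRLS, &arg);
}

/* Called e.g. when the application detects a scene change. */
static void request_keyframe(int fd)
{
        struct v4l2_ext_control key = {
                .id = V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME,
                /* button control, no value needed */
        };
        struct v4l2_ext_controls arg = {
                .which = V4L2_CTRL_WHICH_CUR_VAL,
                .count = 1,
                .controls = &key,
        };

        ioctl(fd, VIDIOC_S_EXT_CTRLS, &arg);
}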
> >
> > However it makes the reference frame management a bit trickier since reference
> > frames might be requested from userspace for a frame that ends up being
> > intra-coded. We can either allow this and silently ignore the info or expect
> > that userspace keeps track of the GOP index and not send references on the first
> > frame.
> >
> > In some codecs, there's also a notion of barrier key-frames (IDR frames in
> > H.264) that strictly forbid using any past reference beyond the frame.
> > There seems to be an assumption that the GOP start uses this kind of frame
> > (and not any intra-coded frame), while the force key frame control does not
> > particularly specify it.
> >
> > In that case we should flush the list of references and userspace should no
> > longer provide references to them for future frames. This puts a requirement on
> > userspace to keep track of GOP start in order to know when to flush its
> > reference list. It could also check if V4L2_BUF_FLAG_KEYFRAME is set, but this
> > could also indicate a general intra-coded frame that is not a barrier.
> >
> > So another possibility would be for userspace to explicitly indicate which
> > frame type to use (in a codec-specific way) and act accordingly, leaving any
> > notion of GOP up to userspace. I feel like this might be the easiest approach
> > while giving an extra degree of control to userspace.
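A sketch of what an explicit, codec-specific frame type control could look
like (everything below is hypothetical and invented here, just to make the
option concrete; no such control exists today):

#include <linux/videodev2.h>

/* Hypothetical per-frame menu control; the ID and enum values are made
 * up for illustration and are not part of any existing uAPI. */
enum hyp_h264_enc_frame_type {
        HYP_H264_ENC_FRAME_TYPE_IDR,
        HYP_H264_ENC_FRAME_TYPE_I,
        HYP_H264_ENC_FRAME_TYPE_P,
        HYP_H264_ENC_FRAME_TYPE_B,
};

#define V4L2_CID_STATELESS_H264_ENC_FRAME_TYPE \
        (V4L2_CID_CODEC_STATELESS_BASE + 401)   /* hypothetical */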
> >
> > # Rate Control
> >
> > Another important feature of encoders is the ability to control the amount of
> > data produced following different rate control strategies. Stateful encoders
> > typically do this in-firmware and expose controls for selecting the strategy
> > and associated targets.
> >
> > It seems desirable to support both automatic and manual rate-control to
> > userspace.
> >
> > Automatic control would be implemented kernel-side (with algos possibly shared
> > across drivers) and reuse existing stateful controls. The advantage is
> > simplicity (userspace does not need to carry its own rate-control
> > implementation) and to ensure that there is a built-in mechanism for common
> > strategies available for every driver (no mandatory dependency on a proprietary
> > userspace stack). There may also be extra statistics or controls available to
> > the driver that allow finer-grain control.
> >
> > Manual control allows userspace to get creative and requires the ability to set
> > the quantization parameter (QP) directly for each frame (controls for this
> > already exist, as many stateful encoders also support it).
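For instance, the existing per-frame-type QP controls could be reused along
these lines (a sketch only; fd is the open encoder device, the QP values are
arbitrary, and true per-frame updates would more likely go through requests):

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Set fixed QPs for I and P frames using existing stateful controls. */
static void set_manual_qp(int fd)
{
        struct v4l2_ext_control qp[] = {
                { .id = V4L2_CID_MPEG_VIDEO_H264_I_FRAME_QP, .value = 28 },
                { .id = V4L2_CID_MPEG_VIDEO_H264_P_FRAME_QP, .value = 30 },
        };
        struct v4l2_ext_controls arg = {
                .which = V4L2_CTRL_WHICH_CUR_VAL,
                .count = 2,
                .controls = qp,
        };

        ioctl(fd, VIDIOC_S_EXT_CTRLS, &arg);
}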
> >
> > # Regions of Interest
> >
> > Regions of interest (ROIs) allow specifying sub-regions of the frame that should
> > be prioritized for quality. Stateless encoders typically support a limited
> > number and allow setting specific QP values for these regions.
> >
> > While the QP value should be used directly in manual rate-control, we probably
> > want to have some "level of importance" setting for kernel-side rate-control,
> > along with the dimensions/position of each ROI. This could be expressed with
> > a new structure containing all these elements and presented as a variable-sized
> > array control with as many elements as the hardware can support.
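A possible shape for such an ROI element, purely as a hypothetical sketch
(the structure name, field names and semantics are invented here):

#include <linux/types.h>

/* Hypothetical element of a variable-sized ROI array control. */
struct hyp_enc_roi {
        __u32 left;
        __u32 top;
        __u32 width;
        __u32 height;
        __s32 qp_delta;         /* applied directly with manual rate control */
        __u32 priority;         /* "level of importance" hint for kernel RC */
};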
> >
> > --
> > Paul Kocialkowski, Bootlin
> > Embedded Linux and kernel engineering
> > https://bootlin.com
>
>
>