Message-ID: <CAAFQd5A0gg4RCKkPd-m2_5=ZyDzZ7hnH9AnTrt7ciXQPPHZU2Q@mail.gmail.com>
Date: Wed, 3 Jul 2019 18:04:21 +0900
From: Tomasz Figa <tfiga@...omium.org>
To: Nicolas Dufresne <nicolas@...fresne.ca>
Cc: Hans Verkuil <hverkuil-cisco@...all.nl>,
Linux Media Mailing List <linux-media@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Alexandre Courbot <acourbot@...omium.org>,
Philipp Zabel <p.zabel@...gutronix.de>,
Stanimir Varbanov <stanimir.varbanov@...aro.org>,
Andrew-CT Chen <andrew-ct.chen@...iatek.com>,
Tiffany Lin <tiffany.lin@...iatek.com>,
Pawel Osciak <posciak@...omium.org>
Subject: Re: [PATCHv4 0/2] Document memory-to-memory video codec interfaces
On Wed, Jun 5, 2019 at 12:19 AM Nicolas Dufresne <nicolas@...fresne.ca> wrote:
>
> Le lundi 03 juin 2019 à 13:28 +0200, Hans Verkuil a écrit :
> > Since Tomasz was very busy with other things, I've taken over this
> > patch series. This v4 includes his draft changes and additional changes
> > from me.
> >
> > This series attempts to add documentation of what was discussed during
> > the Media Workshops at LinuxCon Europe 2012 in Barcelona and later at
> > Embedded Linux Conference Europe 2014 in Düsseldorf, and was eventually
> > written down by Pawel Osciak and tweaked a bit (mostly cosmetically or
> > to make the document more precise) by the Chrome OS video team during
> > the several years of Chrome OS using the APIs in production.
> >
> > Note that most, if not all, of the API is already implemented in
> > existing mainline drivers, such as s5p-mfc or mtk-vcodec. The intention of
> > this series is just to formalize what we already have.
> >
> > Thanks everyone for the huge amount of useful comments on previous
> > versions of this series. Much of the credit should go to Pawel Osciak
> > too, for writing most of the original text of the initial RFC.
> >
> > This v4 incorporates all known comments (let me know if I missed
> > something!) and should be complete for the decoder.
> >
> > For the encoder there are two remaining TODOs for the API:
> >
> > 1) Setting the frame rate so bitrate control can make sense, since
> > bitrate control needs to know this information.
> >
> > Suggested solution: require support for ENUM_FRAMEINTERVALS for the
> > coded pixelformats and S_PARM(OUTPUT). Open question: some drivers
> > (mediatek, hva, coda) require S_PARM(OUTPUT), some (venus) allow both
> > S_PARM(CAPTURE) and S_PARM(OUTPUT). I am inclined to allow both since
> > this is not a CAPTURE vs OUTPUT thing, it is global to both queues.
>
> I agree, as long as it's documented. I can imagine how this could be
> confusing for new users.
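To make the frame rate part concrete, here is a rough sketch of what I
would expect an application to do (untested, error handling omitted and
the values are only examples; ENUM_FRAMEINTERVALS on the coded
pixelformat would tell the application which intervals the driver
accepts):

/* Sketch: tell the encoder the frame rate via S_PARM on the OUTPUT
 * queue, so that bitrate control has the information it needs. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int set_encoder_frame_rate(int fd, unsigned int fps_num,
				  unsigned int fps_den)
{
	struct v4l2_streamparm parm;

	memset(&parm, 0, sizeof(parm));
	parm.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	/* timeperframe is the inverse of the frame rate: 1/30 for 30 fps. */
	parm.parm.output.timeperframe.numerator = fps_den;
	parm.parm.output.timeperframe.denominator = fps_num;

	return ioctl(fd, VIDIOC_S_PARM, &parm);
}

An application would call this before requesting buffers, e.g.
set_encoder_frame_rate(fd, 30, 1); if we allow S_PARM(CAPTURE) too, the
same values would simply be mirrored on the other queue.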
>
> >
> > 2) Interactions between OUTPUT and CAPTURE formats.
> >
> > The main problem is what to do if the CAPTURE sizeimage is too small
> > for the OUTPUT resolution when streaming starts.
> >
> > Proposal: width and height of S_FMT(OUTPUT) are used to
> > calculate a minimum sizeimage (app may request more). This is
> > driver-specific.
> >
> > V4L2_FMT_FLAG_FIXED_RESOLUTION is always set for codec formats
> > for the encoder (i.e. we don't support mid-stream resolution
> > changes for now) and V4L2_EVENT_SOURCE_CHANGE is not
> > supported. See https://patchwork.linuxtv.org/patch/56478/ for
> > the patch adding this flag.
> >
> > Of course, if we start to support mid-stream resolution
> > changes (or other changes that require a source change event),
> > then this flag should be dropped by the encoder driver and
> > documentation on how to handle the source change event should
> > be documented in the encoder spec. I prefer to postpone this
> > until we have an encoder that can actually do mid-stream
> > resolution changes.
> >
> > If sizeimage of the CAPTURE is too small for the OUTPUT
> > resolution and V4L2_EVENT_SOURCE_CHANGE is not supported,
> > then the second STREAMON (either CAPTURE or OUTPUT) will
> > return -ENOMEM since there is not enough memory to do the
> > encode.
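Regarding the proposal above, the application-side sequence I have in
mind would look roughly like this (a sketch only; the pixel format,
resolution and the 2 MiB floor are made-up example values, and the
minimum sizeimage calculation itself stays driver-specific):

/* Sketch: set the raw resolution on OUTPUT, then read back the coded
 * CAPTURE format and, if needed, bump sizeimage before allocating
 * buffers.  Untested, error handling is minimal. */
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int negotiate_encoder_formats(int fd)
{
	struct v4l2_format out = { .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE };
	struct v4l2_format cap = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE };

	/* Raw frames go to the OUTPUT queue of the encoder. */
	out.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12;
	out.fmt.pix_mp.width = 1920;
	out.fmt.pix_mp.height = 1080;
	out.fmt.pix_mp.num_planes = 1; /* the driver may adjust this */
	if (ioctl(fd, VIDIOC_S_FMT, &out))
		return -1;

	/* The driver derives a minimum coded sizeimage from the OUTPUT
	 * resolution; the application may ask for more. */
	if (ioctl(fd, VIDIOC_G_FMT, &cap))
		return -1;
	if (cap.fmt.pix_mp.plane_fmt[0].sizeimage < 2 * 1024 * 1024) {
		cap.fmt.pix_mp.plane_fmt[0].sizeimage = 2 * 1024 * 1024;
		if (ioctl(fd, VIDIOC_S_FMT, &cap))
			return -1;
	}
	return 0;
}

If the resulting CAPTURE sizeimage is still too small for the OUTPUT
resolution, the second STREAMON would then fail with -ENOMEM as
described above.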
>
> You seem confident that we will know immediately if it's too small. But
> what I remember is that HW has an interrupt for this, allowing
> userspace to allocate a larger buffer and resume.
>
> Should we make the capture queue independent of the streaming state, so
> that we can streamoff/reqbufs/.../streamon to resume from an ENOMEM
> error? And shouldn't ENOMEM be returned by the following capture DQBUF
> when such an interrupt is raised?
>
The idea was that stopping the CAPTURE queue would reset the encoder,
i.e. start encoding a new, independent stream once streaming starts
again. Still, given that one would normally only need to reallocate the
buffers on some significant stream parameter change, and such a change
would require emitting all the relevant headers anyway, it probably
doesn't break anything?
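Just to illustrate what I mean, a rough sketch of that flow (untested;
mmap and queueing of the new buffers, as well as the choice of the new
size, are left out):

/* Sketch: recover from a too-small coded buffer by resetting the
 * CAPTURE queue; with the semantics above the encoder then starts a
 * new, independent stream (fresh headers) when streaming resumes. */
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int restart_capture_with_larger_buffers(int fd,
					       unsigned int new_sizeimage,
					       unsigned int count)
{
	enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	struct v4l2_format cap = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE };
	struct v4l2_requestbuffers reqbufs = {
		.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
		.memory = V4L2_MEMORY_MMAP,
	};

	if (ioctl(fd, VIDIOC_STREAMOFF, &type))
		return -1;

	/* Free the old coded buffers... */
	reqbufs.count = 0;
	if (ioctl(fd, VIDIOC_REQBUFS, &reqbufs))
		return -1;

	/* ...ask for a larger coded sizeimage... */
	if (ioctl(fd, VIDIOC_G_FMT, &cap))
		return -1;
	cap.fmt.pix_mp.plane_fmt[0].sizeimage = new_sizeimage;
	if (ioctl(fd, VIDIOC_S_FMT, &cap))
		return -1;

	/* ...allocate new ones and resume streaming. */
	reqbufs.count = count;
	if (ioctl(fd, VIDIOC_REQBUFS, &reqbufs))
		return -1;

	return ioctl(fd, VIDIOC_STREAMON, &type);
}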
Best regards,
Tomasz