Message-ID: <f8d06bee1a2b71239b737b6a83e1a6734f75c36a.camel@ndufresne.ca>
Date: Tue, 23 Jul 2024 15:23:11 -0400
From: Nicolas Dufresne <nicolas@...fresne.ca>
To: Jonas Karlman <jonas@...boo.se>, Benjamin Gaignard
 <benjamin.gaignard@...labora.com>, Hans Verkuil <hverkuil-cisco@...all.nl>,
  mchehab@...nel.org, ezequiel@...guardiasur.com.ar
Cc: linux-media@...r.kernel.org, linux-kernel@...r.kernel.org, 
	linux-rockchip@...ts.infradead.org, kernel@...labora.com
Subject: Re: [PATCH v4 0/2] Enumerate all pixels formats

Hi,

On Friday, 19 July 2024 at 23:59 +0200, Jonas Karlman wrote:
> Hi,
> 
> On 2024-07-19 17:36, Nicolas Dufresne wrote:
> > Hi,
> > 
> > On Friday, 19 July 2024 at 15:47 +0200, Benjamin Gaignard wrote:
> > > > What exactly is the problem you want to solve? A real-life problem, not a theoretical
> > > > one :-)
> > > 
> > > A real-life one: on a board with 2 different stateless decoders, being able to detect
> > > the one which can decode 10bit bitstreams without testing all codec-dependent controls.
> > 
> > That leans toward giving an answer for the selected bitstream format though,
> > since the same driver may do 10bit HEVC without 10bit AV1.
> > 
> > For the use case, both Chromium and GStreamer have a need to categorize
> > decoders so that we avoid trying to use a decoder that can't do the task. More
> > platforms are getting multiple decoders, and we also need to take into account
> > the available software decoders.
> > 
> > Just looking at the codec specific profile is insufficient since we need two
> > conditions to be met.
> > 
> > 1. The driver must support 10bit for the specific CODEC (for most codecs this
> > is visible through the profile)
> > 2. The produced 10bit color format must be supported by userspace
> > 
> > In today's implementation, in order to test this, we'd need to simulate a 10bit
> > header control, so that when enumerating the formats we get a list of 10bit
> > formats (optionally 8bit too, since some decoders can downscale colors), and
> > then verify that these pixel formats are known by userspace. This is not
> > impossible, but very tedious; this proposal tries to make it easier.
> 
> I have also been wondering what the use-case of this would be, and if it
> is something to consider before a FFmpeg v4l2-request hwaccel submission.
> 
> I am guessing GStreamer may need to decide what decoder to use before
> bitstream parsing/decoding has started?

In GStreamer, the parsing happens before the decoder. The parser generates
capabilities that are used during decoder selection. Though, for performance
reasons (GStreamer is plugin based, and loading all the plugins every time
during playback is too slow), we generate static information about these
decoders and cache the results. This information usually only contains
profile/level and some min/max resolution when available. But being able to
discard a profile if (as an example) the produced 10bit pixel format is not
supported by GStreamer is something we want in the long run. It's not essential
of course.

We do have a limitation in the plugger that we can't fall back after the first
bitstream buffer has been passed to the decoder, but this could in theory be
resolved and has nothing to do with this proposal. The proposal is just to help
userspace code stay simple and relatively CODEC agnostic.

> 
> For my re-worked FFmpeg v4l2-request hwaccel series, which should hit the
> ffmpeg-devel list any day now, we try to probe each video device one
> by one, trying to identify if it will be capable of decoding the current
> stream into a known/supported capture format [1]. This typically happens
> when the header for the first slice/frame has been parsed and is used to let
> the driver select its preferred/optimal capture format. The first device
> where all tests pass will be used, and if none works FFmpeg video players
> will typically fall back to software decoding.
> 
> This type of probing may be a little bit limiting and depends too heavily
> on the M2M Stateless Video Decoder Interface's "It is suggested that the
> driver chooses the preferred/optimal format for the current
> configuration."
> 
> Would you suggest I change how this probing happens, to make a more clever
> detection of which media+video device should be used for a specific stream
> with the help of this new flag?

In general, there are only 1 or 2 decoders, so this process should be fast. I
guess if we start seeing large arrays of decoders (e.g. arrays of PCIe
accelerators or similar) it could become interesting to cache the information.

I think the difference is that you have the full bitstream header available at
the time you decide which decoder to use, while I only have an abstract summary
in GStreamer. In short, you don't have to craft anything, which is nice.

Nicolas

> 
> [1] https://github.com/Kwiboo/FFmpeg/blob/v4l2request-2024-v2/libavcodec/v4l2_request_probe.c#L373-L424
> 
> Regards,
> Jonas
> 
> > 
> > Nicolas
> > 
> > 
> 
> _______________________________________________
> Kernel mailing list -- kernel@...lman.collabora.com
> To unsubscribe send an email to kernel-leave@...lman.collabora.com
> This list is managed by https://mailman.collabora.com

