Message-ID: <6674abe5681c0f4a4e4466e034393359346cfda7.camel@ndufresne.ca>
Date: Wed, 28 Jun 2023 12:27:22 -0400
From: Nicolas Dufresne <nicolas@...fresne.ca>
To: Hsia-Jun Li <Randy.Li@...aptics.com>, ayaka <ayaka@...lik.info>
Cc: linux-media@...r.kernel.org, Brian.Starkey@....com,
boris.brezillon@...labora.com, frkoenig@...omium.org,
hans.verkuil@...co.com, hiroh@...omium.org, hverkuil@...all.nl,
kernel@...labora.com, laurent.pinchart@...asonboard.com,
linux-kernel@...r.kernel.org, mchehab@...nel.org,
narmstrong@...libre.com, sakari.ailus@....fi,
stanimir.varbanov@...aro.org, tfiga@...omium.org,
Helen Koike <helen.koike@...labora.com>
Subject: Re: [PATCH v7 1/9] media: v4l2: Extend pixel formats to unify
single/multi-planar handling (and more)
Hi,
please avoid HTML replies on the mailing list.
On Tuesday, 27 June 2023 at 14:42 +0800, Hsia-Jun Li wrote:
> > +/**
> > + * struct v4l2_ext_pix_format - extended multiplanar format definition
> > + * @type: enum v4l2_buf_type; type of the data stream
> > + * @width: image width in pixels
> > + * @height: image height in pixels
> > + * @pixelformat: little endian four character code (fourcc)
> > + * @modifier: modifier applied to the format (used for tiled formats
> > + * and other kind of HW-specific formats, like compressed
> > + * formats) as defined in drm_fourcc.h
> > + * @field: enum v4l2_field; field order (for interlaced video)
> > + * @colorspace: enum v4l2_colorspace; supplemental to pixelformat
> > + * @plane_fmt: per-plane information
> > + * @flags: format flags (V4L2_PIX_FMT_FLAG_*)
> > + * @ycbcr_enc: enum v4l2_ycbcr_encoding, Y'CbCr encoding
> > + * @hsv_enc: enum v4l2_hsv_encoding, HSV encoding
> > + * @quantization: enum v4l2_quantization, colorspace quantization
> > + * @xfer_func: enum v4l2_xfer_func, colorspace transfer function
> > + * @reserved: drivers and applications must zero this array
> > + */
> > +struct v4l2_ext_pix_format {
> > + __u32 type;
> > + __u32 width;
> > + __u32 height;
> > + __u32 pixelformat;
> > + __u64 modifier;
> > + __u32 field;
> > + __u32 colorspace;
> > +
> > + struct v4l2_plane_pix_format plane_fmt[VIDEO_MAX_PLANES];
> > + __u8 flags;
> > + union {
> > + __u8 ycbcr_enc;
> > + __u8 hsv_enc;
> > + };
> > + __u8 quantization;
> > + __u8 xfer_func;
>
> I heard a suggestion that we could remove the colorimetry fields
> here. Although those are useless for codec M2M drivers when no
> pixel format conversion is involved, even HDMI (DRM) cares about
> colorspace.
> For example, if a downstream sink (TV) displays RGB formats from a
> YUV input frame buffer, colorimetry is important, otherwise the
> wrong EOTF would be used. If the YUV is MPEG range (linear EOTF)
> while a non-linear EOTF (full range) is used, you would find that
> the black is not black enough while the white looks gray. A color
> bias would also occur.
> This problem may not happen with an ultra-high-resolution TV, where
> only YUV-type color formats are supported due to the HDMI bandwidth
> limitation.
> The problem I want to raise is the time cost of enumeration. Each
> pixel format and colorimetry combination would require its own
> ioctl() call, and an application like GStreamer would enumerate all
> the possible colorimetries.
> It would be better if we had something like a DRM blob ID, so the
> application could copy the data from a non-DMA buffer provided by
> the kernel.
This is a good topic. Colorimetry could indeed be moved away from the format,
considering it cannot be enumerated. It remains that this information needs to
be passed around, and the format of a blob in media space is not as restricted
as with display HW. I think keeping an "exploded version" of the colorimetry
remains needed.
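
To illustrate the enumeration cost raised above, here is a rough
sketch (plain C against the existing uAPI, error handling omitted) of
how an application probes formats today; adding a colorimetry
dimension would multiply the number of ioctl() calls:

  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  /* One ioctl() per pixel format index; probing every
   * (format, colorimetry) pair would multiply this by all the
   * ycbcr_enc/quantization/xfer_func combinations. */
  static void enum_formats(int fd)
  {
          struct v4l2_fmtdesc fmt;

          memset(&fmt, 0, sizeof(fmt));
          fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
          while (ioctl(fd, VIDIOC_ENUM_FMT, &fmt) == 0)
                  fmt.index++;
  }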
Remember though that for stateful decoders, where the information can be stored
in the bitstream, the decoder is responsible for returning that information.
Currently it is passed through the G_FMT call; it would need to be replaced
with a control, similar to the HDR10 static metadata. If the colorimetry is no
longer static in the future, and may change while streaming, one option would
be read-only (RO) requests. This was foreseen for HDR10+ and Dolby Vision
metadata notably, though other options exist.
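
For the record, a purely hypothetical sketch of what reading such a
control could look like; V4L2_CID_COLORIMETRY and the payload struct
below do not exist, they only stand in for whatever we would design:

  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  /* Hypothetical compound payload, mirroring the "exploded" fields. */
  struct v4l2_ctrl_colorimetry {
          __u32 colorspace;    /* enum v4l2_colorspace */
          __u32 ycbcr_enc;     /* enum v4l2_ycbcr_encoding */
          __u32 quantization;  /* enum v4l2_quantization */
          __u32 xfer_func;     /* enum v4l2_xfer_func */
  };

  static int get_colorimetry(int fd, struct v4l2_ctrl_colorimetry *col)
  {
          struct v4l2_ext_control ctrl;
          struct v4l2_ext_controls ctrls;

          memset(&ctrl, 0, sizeof(ctrl));
          memset(&ctrls, 0, sizeof(ctrls));
          ctrl.id = V4L2_CID_COLORIMETRY;  /* hypothetical ID */
          ctrl.size = sizeof(*col);
          ctrl.ptr = col;
          ctrls.which = V4L2_CTRL_WHICH_CUR_VAL;
          ctrls.count = 1;
          ctrls.controls = &ctrl;

          /* A control change event could then signal mid-stream
           * colorimetry updates to the application. */
          return ioctl(fd, VIDIOC_G_EXT_CTRLS, &ctrls);
  }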
There exist known decoders that can do YUV to RGB conversion using an inline
post-processor (VC8000D and newer is an example), and for these to return
correct colors, the colorimetry information needs to be passed. So it is not
strictly useless.
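
To make that concrete, a minimal illustration (not taken from any
driver) of why ycbcr_enc matters to such a post-processor: BT.601 and
BT.709 define different luma coefficients, so converting with the
wrong matrix skews the colors even when the pixel values are correct:

  /* Full-range YCbCr to RGB, coefficients picked from ycbcr_enc.
   * Kr/Kb: BT.601 uses 0.299/0.114, BT.709 uses 0.2126/0.0722. */
  static void ycbcr_to_rgb(float y, float cb, float cr, int bt709,
                           float *r, float *g, float *b)
  {
          float kr = bt709 ? 0.2126f : 0.299f;
          float kb = bt709 ? 0.0722f : 0.114f;
          float kg = 1.0f - kr - kb;

          *r = y + 2.0f * (1.0f - kr) * cr;
          *b = y + 2.0f * (1.0f - kb) * cb;
          *g = y - (2.0f * kr * (1.0f - kr) * cr +
                    2.0f * kb * (1.0f - kb) * cb) / kg;
  }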
In short, if we drop colorimetry from the format, we also need to go ahead and
design a replacement for it that allows the application to detect changes.
regards,
Nicolas