Message-ID: <CAAFQd5DwEnKznBcJCiC06dU5xJO0Lw6fJSudEfntgLt8fmc+aQ@mail.gmail.com>
Date: Fri, 28 Jul 2023 16:18:29 +0900
From: Tomasz Figa <tfiga@...omium.org>
To: Randy Li <ayaka@...lik.info>
Cc: linux-media@...r.kernel.org, randy.li@...aptics.com,
Brian.Starkey@....com, boris.brezillon@...labora.com,
frkoenig@...omium.org, hans.verkuil@...co.com, hiroh@...omium.org,
hverkuil@...all.nl, kernel@...labora.com,
laurent.pinchart@...asonboard.com, linux-kernel@...r.kernel.org,
mchehab@...nel.org, narmstrong@...libre.com, nicolas@...fresne.ca,
sakari.ailus@....fi, stanimir.varbanov@...aro.org,
Helen Koike <helen.koike@...labora.com>
Subject: Re: [PATCH v7 2/9] media: vivid: Convert to v4l2_ext_pix_format
On Tue, Jul 18, 2023 at 1:00 AM Randy Li <ayaka@...lik.info> wrote:
>
>
> On 2023/7/13 18:39, Tomasz Figa wrote:
> > On Mon, Feb 06, 2023 at 12:33:01PM +0800, ayaka wrote:
> >> From: Helen Koike <helen.koike@...labora.com>
> >>
> >> Simplify multi/single-planar API handling by converting to v4l2_ext_pix_format.
> >>
> >> Duplicate v4l2_ioctl_ops for touch devices. This is done to force the
> >> framework to use the ext hooks when the classic API is used from
> >> userspace on video devices, and to keep touch devices on the classic hooks.
> >>
> >> Signed-off-by: Boris Brezillon <boris.brezillon@...labora.com>
> >> Signed-off-by: Helen Koike <helen.koike@...labora.com>
> >> ---
> >> Changes in v7:
> >> - Force the userspace using the new APIs to operate non-touch drivers.
> > A primary rule of Linux development is to never break userspace. We
> > can't just remove the old API, especially not from existing drivers.
> Maybe I should create a new virtual driver here? It is impossible to
> support the new fourcc modifier with the old APIs.
We need to find a way to make an existing driver support both the old
and new API. Obviously any new functionality of the new API doesn't
have to be retrofitted to the old API.
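For illustration, the usual way to keep both entry points alive is a thin shim: the classic ioctl handler converts to the ext representation, calls the single real implementation, and converts the result back. Below is a rough, self-contained sketch of that idea; the mock_* struct layouts and drv_* helpers are invented stand-ins for this example, not the real definitions from include/uapi/linux/videodev2.h.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MOCK_MAX_PLANES 8
#define MOCK_MAX_WIDTH  4096

struct mock_plane_fmt {
	uint32_t sizeimage;
	uint32_t bytesperline;
};

struct mock_pix_format_mplane {		/* stand-in for the classic mplane format */
	uint32_t width, height, pixelformat, colorspace;
	uint32_t num_planes;
	struct mock_plane_fmt plane_fmt[MOCK_MAX_PLANES];
};

struct mock_ext_pix_format {		/* stand-in for the proposed ext format */
	uint32_t width, height, pixelformat, colorspace;
	struct mock_plane_fmt plane_fmt[MOCK_MAX_PLANES];
};

/* The one "real" TRY_FMT implementation, written against the ext type. */
static int drv_try_fmt_ext(struct mock_ext_pix_format *f)
{
	if (f->width > MOCK_MAX_WIDTH)
		f->width = MOCK_MAX_WIDTH;
	/* Assume a single 32bpp plane for this sketch. */
	f->plane_fmt[0].bytesperline = f->width * 4;
	f->plane_fmt[0].sizeimage = f->plane_fmt[0].bytesperline * f->height;
	return 0;
}

/* Classic hook: round-trip through the ext representation. */
static int drv_try_fmt_classic(struct mock_pix_format_mplane *mp)
{
	struct mock_ext_pix_format e = {
		.width = mp->width, .height = mp->height,
		.pixelformat = mp->pixelformat, .colorspace = mp->colorspace,
	};
	int ret = drv_try_fmt_ext(&e);

	if (ret)
		return ret;
	mp->width = e.width;
	mp->height = e.height;
	mp->num_planes = 1;	/* ext side derives planes from pixelformat */
	mp->plane_fmt[0] = e.plane_fmt[0];
	return 0;
}
```

With a shim like this only the ext handler needs to know the driver internals; ext-only features simply have no classic equivalent to translate, which matches the point above that new functionality need not be retrofitted.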
> >
> > [snip]
> >> int vivid_try_fmt_vid_cap(struct file *file, void *priv,
> >> - struct v4l2_format *f)
> >> + struct v4l2_ext_pix_format *f)
> >> {
> >> - struct v4l2_pix_format_mplane *mp = &f->fmt.pix_mp;
> >> - struct v4l2_plane_pix_format *pfmt = mp->plane_fmt;
> >> struct vivid_dev *dev = video_drvdata(file);
> >> + struct v4l2_plane_pix_format *pfmt = f->plane_fmt;
> >> const struct vivid_fmt *fmt;
> >> unsigned bytesperline, max_bpl;
> >> unsigned factor = 1;
> >> unsigned w, h;
> >> unsigned p;
> >> - bool user_set_csc = !!(mp->flags & V4L2_PIX_FMT_FLAG_SET_CSC);
> > Why is this condition being removed?
>
> Because v4l2_ext_pix_format has a struct for the colorspace?
What do you mean? I see it has the same enum field for colorspace as
the original v4l2_pix_format_mplane.
The flag was needed for CAPTURE format to tell the driver whether it
should perform a conversion to the requested colorspace or just fill
in the color space as inferred from the current configuration (e.g.
OUTPUT format).
How was that addressed in the new API?
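For reference, the decision the removed `user_set_csc` check encoded can be sketched as below: a user-requested value wins only when the SET_CSC flag is set and the value is valid; otherwise the driver fills in the value inferred from the current configuration. The mock_/drv_ names and numeric values are invented for this illustration, not real uapi definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MOCK_FLAG_SET_CSC	0x0001	/* stand-in for V4L2_PIX_FMT_FLAG_SET_CSC */
#define MOCK_CSC_DEFAULT	1	/* e.g. colorspace inferred from the OUTPUT format */
#define MOCK_CSC_LAST		10

static bool mock_colorspace_valid(uint32_t c)
{
	return c > 0 && c <= MOCK_CSC_LAST;
}

/*
 * Pick the CAPTURE colorspace: the request is honoured only when the
 * SET_CSC flag was set by userspace and the requested value is valid;
 * in every other case the driver reports the inferred default.
 */
static uint32_t drv_pick_colorspace(uint32_t flags, uint32_t requested)
{
	bool user_set_csc = flags & MOCK_FLAG_SET_CSC;

	if (!user_set_csc || !mock_colorspace_valid(requested))
		return MOCK_CSC_DEFAULT;	/* fill in, no conversion */
	return requested;			/* perform the requested conversion */
}
```

The same pattern would apply per field (xfer_func, ycbcr_enc/hsv_enc, quantization), which is exactly what the removed hunks above did with `user_set_csc`.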
>
> Would you like the idea of the driver exporting a buffer that contains
> all the info for an enumeration?
>
> >
> > Best regards,
> > Tomasz
> >
> >>
> >> - fmt = vivid_get_format(dev, mp->pixelformat);
> >> + fmt = vivid_get_format(dev, f->pixelformat);
> >> if (!fmt) {
> >> dprintk(dev, 1, "Fourcc format (0x%08x) unknown.\n",
> >> - mp->pixelformat);
> >> - mp->pixelformat = V4L2_PIX_FMT_YUYV;
> >> - fmt = vivid_get_format(dev, mp->pixelformat);
> >> + f->pixelformat);
> >> + f->pixelformat = V4L2_PIX_FMT_YUYV;
> >> + fmt = vivid_get_format(dev, f->pixelformat);
> >> }
> >>
> >> - mp->field = vivid_field_cap(dev, mp->field);
> >> + f->field = vivid_field_cap(dev, f->field);
> >> if (vivid_is_webcam(dev)) {
> >> const struct v4l2_frmsize_discrete *sz =
> >> v4l2_find_nearest_size(webcam_sizes,
> >> VIVID_WEBCAM_SIZES, width,
> >> - height, mp->width, mp->height);
> >> + height, f->width, f->height);
> >>
> >> w = sz->width;
> >> h = sz->height;
> >> @@ -604,14 +603,14 @@ int vivid_try_fmt_vid_cap(struct file *file, void *priv,
> >> w = dev->src_rect.width;
> >> h = dev->src_rect.height;
> >> }
> >> - if (V4L2_FIELD_HAS_T_OR_B(mp->field))
> >> + if (V4L2_FIELD_HAS_T_OR_B(f->field))
> >> factor = 2;
> >> if (vivid_is_webcam(dev) ||
> >> (!dev->has_scaler_cap && !dev->has_crop_cap && !dev->has_compose_cap)) {
> >> - mp->width = w;
> >> - mp->height = h / factor;
> >> + f->width = w;
> >> + f->height = h / factor;
> >> } else {
> >> - struct v4l2_rect r = { 0, 0, mp->width, mp->height * factor };
> >> + struct v4l2_rect r = { 0, 0, f->width, f->height * factor };
> >>
> >> v4l2_rect_set_min_size(&r, &vivid_min_rect);
> >> v4l2_rect_set_max_size(&r, &vivid_max_rect);
> >> @@ -624,16 +623,15 @@ int vivid_try_fmt_vid_cap(struct file *file, void *priv,
> >> } else if (!dev->has_scaler_cap && !dev->has_crop_cap) {
> >> v4l2_rect_set_min_size(&r, &dev->src_rect);
> >> }
> >> - mp->width = r.width;
> >> - mp->height = r.height / factor;
> >> + f->width = r.width;
> >> + f->height = r.height / factor;
> >> }
> >>
> >> /* This driver supports custom bytesperline values */
> >>
> >> - mp->num_planes = fmt->buffers;
> >> for (p = 0; p < fmt->buffers; p++) {
> >> /* Calculate the minimum supported bytesperline value */
> >> - bytesperline = (mp->width * fmt->bit_depth[p]) >> 3;
> >> + bytesperline = (f->width * fmt->bit_depth[p]) >> 3;
> >> /* Calculate the maximum supported bytesperline value */
> >> max_bpl = (MAX_ZOOM * MAX_WIDTH * fmt->bit_depth[p]) >> 3;
> >>
> >> @@ -642,48 +640,49 @@ int vivid_try_fmt_vid_cap(struct file *file, void *priv,
> >> if (pfmt[p].bytesperline < bytesperline)
> >> pfmt[p].bytesperline = bytesperline;
> >>
> >> - pfmt[p].sizeimage = (pfmt[p].bytesperline * mp->height) /
> >> + pfmt[p].sizeimage = (pfmt[p].bytesperline * f->height) /
> >> fmt->vdownsampling[p] + fmt->data_offset[p];
> >> -
> >> - memset(pfmt[p].reserved, 0, sizeof(pfmt[p].reserved));
> >> }
> >> +
> >> + if (p < VIDEO_MAX_PLANES)
> >> + pfmt[p].sizeimage = 0;
> >> +
> >> for (p = fmt->buffers; p < fmt->planes; p++)
> >> - pfmt[0].sizeimage += (pfmt[0].bytesperline * mp->height *
> >> + pfmt[0].sizeimage += (pfmt[0].bytesperline * f->height *
> >> (fmt->bit_depth[p] / fmt->vdownsampling[p])) /
> >> (fmt->bit_depth[0] / fmt->vdownsampling[0]);
> >>
> >> - if (!user_set_csc || !v4l2_is_colorspace_valid(mp->colorspace))
> >> - mp->colorspace = vivid_colorspace_cap(dev);
> >> + if (!v4l2_is_colorspace_valid(f->colorspace))
> >> + f->colorspace = vivid_colorspace_cap(dev);
> >>
> >> - if (!user_set_csc || !v4l2_is_xfer_func_valid(mp->xfer_func))
> >> - mp->xfer_func = vivid_xfer_func_cap(dev);
> >> + if (!v4l2_is_xfer_func_valid(f->xfer_func))
> >> + f->xfer_func = vivid_xfer_func_cap(dev);
> >>
> >> if (fmt->color_enc == TGP_COLOR_ENC_HSV) {
> >> - if (!user_set_csc || !v4l2_is_hsv_enc_valid(mp->hsv_enc))
> >> - mp->hsv_enc = vivid_hsv_enc_cap(dev);
> >> + if (!v4l2_is_hsv_enc_valid(f->hsv_enc))
> >> + f->hsv_enc = vivid_hsv_enc_cap(dev);
> >> } else if (fmt->color_enc == TGP_COLOR_ENC_YCBCR) {
> >> - if (!user_set_csc || !v4l2_is_ycbcr_enc_valid(mp->ycbcr_enc))
> >> - mp->ycbcr_enc = vivid_ycbcr_enc_cap(dev);
> >> + if (!v4l2_is_ycbcr_enc_valid(f->ycbcr_enc))
> >> + f->ycbcr_enc = vivid_ycbcr_enc_cap(dev);
> >> } else {
> >> - mp->ycbcr_enc = vivid_ycbcr_enc_cap(dev);
> >> + f->ycbcr_enc = vivid_ycbcr_enc_cap(dev);
> >> }
> >>
> >> if (fmt->color_enc == TGP_COLOR_ENC_YCBCR ||
> >> fmt->color_enc == TGP_COLOR_ENC_RGB) {
> >> - if (!user_set_csc || !v4l2_is_quant_valid(mp->quantization))
> >> - mp->quantization = vivid_quantization_cap(dev);
> >> + if (!v4l2_is_quant_valid(f->quantization))
> >> + f->quantization = vivid_quantization_cap(dev);
> >> } else {
> >> - mp->quantization = vivid_quantization_cap(dev);
> >> + f->quantization = vivid_quantization_cap(dev);
> >> }
> >>
> >> - memset(mp->reserved, 0, sizeof(mp->reserved));
> >> + memset(f->reserved, 0, sizeof(f->reserved));
> >> return 0;
> >> }
> > [snip]