Date:   Fri,  8 May 2020 11:51:27 +0530
From:   Dikshita Agarwal <dikshita@...eaurora.org>
To:     linux-media@...r.kernel.org, stanimir.varbanov@...aro.org
Cc:     linux-kernel@...r.kernel.org, linux-arm-msm@...r.kernel.org,
        vgarodia@...eaurora.org, majja@...eaurora.org, jdas@...eaurora.org,
        Dikshita Agarwal <dikshita@...eaurora.org>
Subject: [RFC] METADATA design using V4l2 Request API

There are many commercial video use cases which need metadata to be exchanged
between the v4l2 client and the v4l2 driver.

METADATA has the following requirements associated with it:
•Metadata is optional info available for a buffer; it is not mandatory for every buffer.
 For example, consider the ROI (Region Of Interest) metadata. ROI is specified by the client to indicate
 the region where enhanced quality is desired. This metadata is given as input information
 to the encoder output plane. The client may or may not specify the ROI for a frame during encode as
 input metadata, and if the client has not provided ROI metadata for a given frame,
 it would be incorrect to take the metadata from the previous frame. If the data and
 metadata are asynchronous, it would be difficult for the hardware to decide whether it
 needs to wait for a metadata buffer before processing the input frame for encoding.
•Synchronize the buffer requirement across both video nodes/sessions
 (in case metadata is processed as a separate v4l2 video node/session).
 This is to avoid buffer starvation.
•Associate the metadata buffer with the data buffer without adding any pipeline delay
 from waiting on each other. This applies both on the hardware side (the processing end)
 and the client side (the receiving end).
•Low-latency use cases like WFD/split rendering/game streaming/IMS have sub-50ms end-to-end latency
 requirements, and it is not practical to stall the pipeline due to inherent framework latencies.
 High-performance use cases like high frame rate playback/record can suffer frame loss from any pipeline latency.
 
To address all of the above requirements, we used the v4l2 Request API as the interface.

As an experiment, we have introduced a new control, V4L2_CID_MPEG_VIDEO_VENUS_METADATA,
to carry the METADATA info. The exact controls can be finalized once the interface is discussed.
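
For reference, a vendor-specific control like this would typically be registered in the
driver's control handler with v4l2_ctrl_new_custom(). The sketch below is only illustrative
and is not the actual definition from the patches (see [1] and the series below); the control
type, payload size and the venc_ctrl_ops/venc_ctrl_init references are assumptions:

  /*
   * Illustrative only: how a custom compound control could be registered in
   * venc_ctrls.c.  The real definition (ID value, type, payload layout) is
   * part of this series; the U8 array type and 1024-byte size are assumptions.
   */
  #include <media/v4l2-ctrls.h>
  #include <media/venus-ctrls.h>        /* proposed header carrying the control ID */

  static const struct v4l2_ctrl_config venus_metadata_ctrl = {
          .ops  = &venc_ctrl_ops,       /* the encoder's existing ctrl ops */
          .id   = V4L2_CID_MPEG_VIDEO_VENUS_METADATA,
          .name = "Venus Metadata",
          .type = V4L2_CTRL_TYPE_U8,    /* opaque byte payload (assumed) */
          .min  = 0x00,
          .max  = 0xff,
          .step = 1,
          .def  = 0x00,
          .dims = { 1024 },             /* payload size is a guess */
  };

  /* In the control setup path, e.g. venc_ctrl_init(): */
  v4l2_ctrl_new_custom(&inst->ctrl_handler, &venus_metadata_ctrl, NULL);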

For setting metadata from userspace to the kernel, say on the encode output plane, the
following call sequence was used (a minimal userspace sketch is given after the steps):
1. The video driver registers a media device and creates a media node in /dev.
2. A request fd is allocated by calling the MEDIA_IOC_REQUEST_ALLOC ioctl on the media fd.
3. The METADATA configuration is applied to the request fd using the VIDIOC_S_EXT_CTRLS ioctl,
   and the same request fd is added to the buffer structure before calling VIDIOC_QBUF on the video fd.
4. The same request is then queued to the driver through the MEDIA_REQUEST_IOC_QUEUE ioctl, as a result
   of which the METADATA control is applied to the buffer through S_CTRL.
5. Once the control is applied and the request is completed, the MEDIA_REQUEST_IOC_REINIT ioctl is called
   to re-initialize the request.
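
A minimal userspace sketch of steps 2-5 (error paths trimmed) could look like the following,
assuming the media and video fds are already open and the OUTPUT (multiplanar) queue is set up
with MMAP buffers. The control ID value used below is only a stand-in and the metadata payload
layout is illustrative:

  #include <sys/ioctl.h>
  #include <linux/media.h>
  #include <linux/videodev2.h>

  /* Proposed in this RFC (include/media/venus-ctrls.h); the value here is a stand-in. */
  #ifndef V4L2_CID_MPEG_VIDEO_VENUS_METADATA
  #define V4L2_CID_MPEG_VIDEO_VENUS_METADATA (V4L2_CID_MPEG_BASE + 0x2000)
  #endif

  static int queue_output_buf_with_metadata(int media_fd, int video_fd,
                                             unsigned int index,
                                             void *meta, unsigned int meta_size)
  {
          int req_fd, ret;

          /* Step 2: allocate a request on the media device. */
          ret = ioctl(media_fd, MEDIA_IOC_REQUEST_ALLOC, &req_fd);
          if (ret)
                  return ret;

          /* Step 3a: stage the metadata control in the request. */
          struct v4l2_ext_control ctrl = {
                  .id   = V4L2_CID_MPEG_VIDEO_VENUS_METADATA,
                  .size = meta_size,
                  .ptr  = meta,
          };
          struct v4l2_ext_controls ctrls = {
                  .which      = V4L2_CTRL_WHICH_REQUEST_VAL,
                  .request_fd = req_fd,
                  .count      = 1,
                  .controls   = &ctrl,
          };
          ret = ioctl(video_fd, VIDIOC_S_EXT_CTRLS, &ctrls);
          if (ret)
                  return ret;

          /* Step 3b: queue the OUTPUT buffer and tie it to the request. */
          struct v4l2_plane planes[1] = { 0 };
          struct v4l2_buffer buf = {
                  .type       = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
                  .memory     = V4L2_MEMORY_MMAP,
                  .index      = index,
                  .flags      = V4L2_BUF_FLAG_REQUEST_FD,
                  .request_fd = req_fd,
                  .m.planes   = planes,
                  .length     = 1,
          };
          ret = ioctl(video_fd, VIDIOC_QBUF, &buf);
          if (ret)
                  return ret;

          /* Step 4: queue the request; the driver applies the control via s_ctrl. */
          ret = ioctl(req_fd, MEDIA_REQUEST_IOC_QUEUE);
          if (ret)
                  return ret;

          /* Step 5: once the request completes (poll() for POLLPRI), recycle it. */
          return ioctl(req_fd, MEDIA_REQUEST_IOC_REINIT);
  }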

We could achieve the same on the capture plane as well by removing a few checks currently
present in the v4l2 core which restrict the implementation to the output plane.

We profiled the data below with this implementation:
1. Total time taken (round trip) for setting up control data on the video driver
   with VIDIOC_S_EXT_CTRLS, QBUF and Queue Request: 737us
2. Time taken for the first QBUF on the output plane to reach the driver with the Request API enabled (one way): 723us
3. Time taken for the first QBUF on the output plane to reach the driver without the Request API (one way): 250us
4. Time taken by each ioctl to complete (round trip) with the Request API enabled:
    a. VIDIOC_S_EXT_CTRLS: 201us
    b. VIDIOC_QBUF: 92us
    c. MEDIA_REQUEST_IOC_QUEUE: 386us

Kindly share your feedback/comments on the design/call sequence.
Also, since we have experimented with and enabled metadata on the capture plane as well, please comment
if there is any issue with allowing the metadata exchange on the capture plane too.

Reference for client side implementation can be found at [1].

Thanks,
Dikshita

[1] https://git.linaro.org/people/stanimir.varbanov/v4l2-encode.git/log/?h=dikshita/request-api

Dikshita Agarwal (3):
  Register for media device
    - Initialize and register for media device
    - define venus_m2m_media_ops
    - Implement APIs to register/unregister media controller.
  Enable Request API for output buffers
    - Add dependency on MEDIA_CONTROLLER_REQUEST_API in Kconfig.
    - Initialize vb2 ops buf_out_validate and buf_request_complete.
    - Add support for custom Metadata control V4L2_CID_MPEG_VIDEO_VENUS_METADATA
    - Implemented/Integrated APIs for Request setup/complete.
  Enable Request API for Capture Buffers

 drivers/media/common/videobuf2/videobuf2-v4l2.c |   4 +-
 drivers/media/platform/Kconfig                  |   2 +-
 drivers/media/platform/qcom/venus/core.h        |  36 ++++
 drivers/media/platform/qcom/venus/helpers.c     | 247 +++++++++++++++++++++++-
 drivers/media/platform/qcom/venus/helpers.h     |  15 ++
 drivers/media/platform/qcom/venus/venc.c        |  63 +++++-
 drivers/media/platform/qcom/venus/venc_ctrls.c  |  61 +++++-
 drivers/media/v4l2-core/v4l2-ctrls.c            |  10 +
 drivers/media/v4l2-core/v4l2-mem2mem.c          |  17 +-
 include/media/v4l2-ctrls.h                      |   1 +
 include/media/venus-ctrls.h                     |  22 +++
 11 files changed, 465 insertions(+), 13 deletions(-)
 create mode 100644 include/media/venus-ctrls.h

-- 
1.9.1
