Message-ID: <f80dfa8f-0188-a733-bde6-e3210977d910@nvidia.com>
Date:   Fri, 3 Apr 2020 00:36:12 -0700
From:   Sowjanya Komatineni <skomatineni@...dia.com>
To:     Laurent Pinchart <laurent.pinchart@...asonboard.com>
CC:     Hans Verkuil <hverkuil@...all.nl>,
        Sakari Ailus <sakari.ailus@....fi>, <thierry.reding@...il.com>,
        <jonathanh@...dia.com>, <frankc@...dia.com>,
        <helen.koike@...labora.com>, <digetx@...il.com>,
        <sboyd@...nel.org>, <linux-media@...r.kernel.org>,
        <devicetree@...r.kernel.org>, <linux-clk@...r.kernel.org>,
        <linux-tegra@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v5 6/9] media: tegra: Add Tegra210 Video input driver

As we don't need an MC-based implementation for the Tegra internal TPG, we
will continue with the video-node-based approach for the CSI sub-device in
this series; a rough sketch of that usage model follows below.

The next series will include sensor support. We will discuss the design
internally by then and implement accordingly.
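
For reference, a minimal sketch of what that video-node-centric usage looks
like from userspace; the device path, resolution and pixel format are
placeholders rather than the actual Tegra VI defaults:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Configure and start a capture through a single video node. */
int start_tpg_capture(void)
{
        /* Hypothetical VI video node path. */
        int fd = open("/dev/video0", O_RDWR);
        if (fd < 0)
                return -1;

        /* Placeholder TPG resolution and Bayer format. */
        struct v4l2_format fmt = {0};
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = 1280;
        fmt.fmt.pix.height = 720;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SRGGB10;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
                return -1;

        /* Request a few MMAP buffers and start streaming. */
        struct v4l2_requestbuffers req = {0};
        req.count = 4;
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
                return -1;

        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        if (ioctl(fd, VIDIOC_STREAMON, &type) < 0)
                return -1;

        /* The caller mmaps, queues and dequeues buffers from here. */
        return fd;
}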

Thanks

Sowjanya


On 4/1/20 11:24 AM, Sowjanya Komatineni wrote:
>
> On 4/1/20 9:58 AM, Laurent Pinchart wrote:
>>
>> Hi Sowjanya,
>>
>> On Wed, Apr 01, 2020 at 09:36:03AM -0700, Sowjanya Komatineni wrote:
>>> Hi Sakari/Laurent,
>>>
>>> Few questions to confirm my understanding on below discussion.
>>>
>>> 1. Are the sensors you refer to as not working with a single devnode
>>> controlling the pipeline devices the ones with a built-in ISP, where
>>> the pipeline and subdevices are set up separately?
>> Sensors that include ISPs could indeed need to be exposed as multiple
>> subdevs, but I was mostly referring to raw Bayer sensors with hardware
>> architectures similar to the SMIA++ and MIPI CCS specifications. Those
>> sensors can perform cropping in up to three different locations (analog
>> crop, digital crop, output crop), and can also scale in up to three
>> different locations (binning, skipping and filter-based scaling).
>>
>> Furthermore, with the V4L2 support for multiplexed streams that we are
>> working on, a sensor that can produce both image data and embedded data
>> would also need to be split in multiple subdevs.
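
For reference, a minimal sketch of how userspace would drive one of those
crop stages on an MC-centric sensor subdev through VIDIOC_SUBDEV_S_SELECTION;
the subdev node, pad index and crop rectangle below are made-up placeholders:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/v4l2-subdev.h>

/* Apply a crop rectangle on one pad of a sensor subdev. */
int set_sensor_crop(void)
{
        /* Hypothetical sensor subdev node. */
        int fd = open("/dev/v4l-subdev1", O_RDWR);
        if (fd < 0)
                return -1;

        struct v4l2_subdev_selection sel = {0};
        sel.which = V4L2_SUBDEV_FORMAT_ACTIVE;
        sel.pad = 0;                    /* placeholder pad index */
        sel.target = V4L2_SEL_TGT_CROP; /* which crop stage depends on the pad */
        sel.r.left = 0;
        sel.r.top = 0;
        sel.r.width = 1920;             /* placeholder rectangle */
        sel.r.height = 1080;

        return ioctl(fd, VIDIOC_SUBDEV_S_SELECTION, &sel);
}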
>
> Thanks Laurent.
>
> For sensors that carry meta/embedded data along with the image in the
> same frame, the Tegra VI hardware extracts the embedded data based on
> the programmed embedded data size.
>
> So in our driver we capture it into a separate buffer, as the embedded
> data is part of the frame.
>
> Is your comment above on multiplexed streams about sensors that use
> different virtual channels for different streams?
>
>
>>> 2. With a driver that controls the entire pipeline through a single
>>> device node, compared to an MC-based one, is the limitation for
>>> userspace apps only with these complex camera sensors?
>> In those cases, several policy decisions on how to configure the sensor
>> (whether to use binning, skipping and/or filter-based scaling for
>> instance, or how much cropping and scaling to apply to achieve a certain
>> output resolution) will need to be implemented in the kernel, and
>> userspace will not have any control on them.
>>
>>> 3. Will all upstream video capture drivers eventually be moved to the
>>> MC-based model?
>> I think we'll see a decrease of the video-node-centric drivers in the
>> future for embedded systems, especially the ones that include an ISP.
>> When a system has an ISP, even if the ISP is implemented as a
>> memory-to-memory device separate from the CSI-2 capture side, userspace
>> will likely have a need for fine-grained control of the camera sensor.
>>
>>> 4. Based on the libcamera documentation, it looks like it works with
>>> both MC-based and single-devnode pipeline drivers for normal sensors,
>>> and the limitation only arises when we use a sensor with a built-in
>>> ISP or an ISP hardware block. Is my understanding correct?
>> libcamera supports both, it doesn't put any restriction in that area.
>> The pipeline handler (the device-specific code in libcamera that
>> configures and control the hardware pipeline) is responsible for
>> interfacing with the kernel drivers, and is free to use an MC-centric or
>> video-node-centric API depending on what the kernel drivers offer.
>>
>> The IPA (image processing algorithms) module is also vendor-specific.
>> Although it will not interface directly with kernel drivers, it will
>> have requirements on how fine-grained the control of the sensor needs
>> to be.
>> For systems that have an ISP in the SoC, reaching a high image quality
>> level requires fine-grained control of the sensor, or at the very least
>> being able to retrieve fine-grained sensor configuration information
>> from the kernel. For systems using a camera sensor with an integrated
>> ISP and a CSI-2 receiver without any further processing on the SoC side,
>> there will be no such fine-grained control of the sensor by the IPA (and
>> there could even be no IPA module at all).
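
As an illustration of the MC-centric side, a rough sketch of how a pipeline
handler could walk the media graph from userspace; the /dev/media0 path is a
placeholder and the entity names simply come from whatever driver is loaded:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/media.h>

/* Enumerate all entities exposed on a media device. */
int list_entities(void)
{
        /* Placeholder media controller node. */
        int fd = open("/dev/media0", O_RDWR);
        if (fd < 0)
                return -1;

        struct media_entity_desc entity;
        memset(&entity, 0, sizeof(entity));
        entity.id = MEDIA_ENT_ID_FLAG_NEXT;     /* ask for the first entity */

        while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &entity) == 0) {
                printf("entity %u: %s (%u pads, %u links)\n",
                       entity.id, entity.name,
                       (unsigned int)entity.pads,
                       (unsigned int)entity.links);
                /* Continue from the entity just returned. */
                entity.id |= MEDIA_ENT_ID_FLAG_NEXT;
        }

        return 0;
}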
>>
>> -- 
>> Regards,
>>
>> Laurent Pinchart
