Message-ID: <413937a1-bf4c-4926-945f-7df39869f215@arm.com>
Date: Thu, 2 Oct 2025 08:42:56 +0100
From: Anthony McGivern <anthony.mcgivern@....com>
To: Nicolas Dufresne <nicolas.dufresne@...labora.com>,
 Laurent Pinchart <laurent.pinchart@...asonboard.com>,
 Jacopo Mondi <jacopo.mondi@...asonboard.com>
Cc: "bcm-kernel-feedback-list@...adcom.com"
 <bcm-kernel-feedback-list@...adcom.com>,
 "florian.fainelli@...adcom.com" <florian.fainelli@...adcom.com>,
 "hverkuil@...nel.org" <hverkuil@...nel.org>,
 "kernel-list@...pberrypi.com" <kernel-list@...pberrypi.com>,
 "Kieran Bingham (kieran.bingham@...asonboard.com)"
 <kieran.bingham@...asonboard.com>,
 "linux-arm-kernel@...ts.infradead.org"
 <linux-arm-kernel@...ts.infradead.org>,
 "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
 "linux-media@...r.kernel.org" <linux-media@...r.kernel.org>,
 "linux-rpi-kernel@...ts.infradead.org"
 <linux-rpi-kernel@...ts.infradead.org>,
 "m.szyprowski@...sung.com" <m.szyprowski@...sung.com>,
 "mchehab@...nel.org" <mchehab@...nel.org>,
 "sakari.ailus@...ux.intel.com" <sakari.ailus@...ux.intel.com>,
 "tfiga@...omium.org" <tfiga@...omium.org>,
 "tomi.valkeinen@...asonboard.com" <tomi.valkeinen@...asonboard.com>
Subject: Re: [PATCH v2 12/27] media: v4l2-subdev: Introduce v4l2 subdev
 context


Hi all,

On 30/09/2025 13:58, Nicolas Dufresne wrote:
> Hi Laurent,
> 
> On Tuesday, 30 September 2025 at 13:16 +0300, Laurent Pinchart wrote:
>> On Tue, Sep 30, 2025 at 11:53:39AM +0200, Jacopo Mondi wrote:
>>> On Thu, Sep 25, 2025 at 09:26:56AM +0000, Anthony McGivern wrote:
>>>> On Thu, Jul 24, 2025 at 16:10:19 +0200, Jacopo Mondi wrote:
>>>>> Introduce a new type in v4l2 subdev that represents a v4l2 subdevice
>>>>> context. It extends 'struct media_entity_context' and is intended to be
>>>>> extended by drivers that can store driver-specific information
>>>>> in their derived types.
>>>>>
>>>>> Signed-off-by: Jacopo Mondi <jacopo.mondi@...asonboard.com>
>>>>
>>>> I am interested in how the sub-device context will handle the
>>>> Streams API. Looking at the commits, the
>>>> v4l2_subdev_enable/disable_streams functions still appear to operate
>>>> on the main sub-device only. I take it we would have additional
>>>> context-aware functions here that can fetch the subdev state from
>>>> the sub-device context, though I imagine some fields would have to be
>>>> moved into the context, such as s_stream_enabled, or even
>>>> enabled_pads for non-stream-aware drivers?
>>>
>>> Mmm, good question. I admit I might not have considered that part yet.
>>>
>>> The Streams API should go in as soon as Sakari's long-awaited series
>>> hits mainline, and I will certainly need to rebase soon, so I'll
>>> probably get back to this.
>>>
>>> Do you have any ideas about how this should be designed?

Hmm, while I haven't thought through a full implementation, I did some testing
where I added a v4l2_subdev_context_enable_streams and its respective
disable_streams counterpart. These take the v4l2_subdev_context so that when
the subdev state is fetched it is retrieved from the context. I think this
would work with the streams API; however, it will not work for drivers that
don't support it, since fields such as enabled_pads live in the v4l2_subdev
struct itself. Assuming those fields are only used in the V4L2 core (I haven't
checked this fully), they could potentially be moved into the subdev state?
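
For illustration, my experiment looked roughly like the sketch below. The
context layout (the 'sd' back-pointer) and the state-lookup helper are
assumptions on my part and may well end up different in the series; only the
pad operation call matches the existing enable_streams signature:

/*
 * Rough sketch only: mirrors v4l2_subdev_enable_streams(), but resolves
 * the subdev state through the context rather than the subdev. Both
 * ctx->sd and v4l2_subdev_context_get_state() are hypothetical names.
 */
int v4l2_subdev_context_enable_streams(struct v4l2_subdev_context *ctx,
                                       u32 pad, u64 streams_mask)
{
        struct v4l2_subdev *sd = ctx->sd;
        struct v4l2_subdev_state *state;

        state = v4l2_subdev_context_get_state(ctx);

        /*
         * The bookkeeping that v4l2_subdev_enable_streams() does would
         * go here, but fields such as enabled_pads would need to move
         * out of struct v4l2_subdev for it to be per-context.
         */

        return v4l2_subdev_call(sd, pad, enable_streams, state, pad,
                                streams_mask);
}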

There were some other areas I found when trying to implement this in the
driver we are working on. For example, media_pad_remote_pad_unique() only
takes the media_pad struct, meaning multi-context would not work here, at
least not in the way I expected. Perhaps this is where we have some differing
thoughts on how it would be used; see some details below about the driver we
are working on.
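
To make the gap concrete, a context-aware lookup would presumably need a
signature along these lines (purely hypothetical, shown only to illustrate
what the current helper cannot express):

/*
 * media_pad_remote_pad_unique() walks the entity's links and their
 * global flags, so two contexts with different link states cannot be
 * told apart. A per-context variant would need the context passed in:
 */
struct media_pad *
media_pad_remote_pad_unique_ctx(const struct media_pad *pad,
                                const struct media_entity_context *ctx);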

>>
>> Multi-context is designed for memory to memory pipelines, as inline
>> pipelines can't be time-multiplexed (at least not without very specific
>> hardware designs that I haven't encountered in SoCs so far). In a
> 
> I probably don't understand what you mean here, since I know you are well aware
> of the ISP design on RK3588. It has two cores, which allow handling up to 2
> sensors inline, but once you need more streams, you should have a way to
> reconfigure the pipeline and use one or both cores in an m2m (multi-context)
> fashion to extend its capability (balancing the resolutions and rates as usual).
> 
> Perhaps you mean this specific case is already covered by the stream API
> combined with other floating proposals? I think most of us are missing the
> big picture and just see organic proposals toward goals documented as
> unrelated, but that actually look related.
> 
> Nicolas
> 
>> memory-to-memory pipeline I expect the .enable/disable_streams()
>> operation to not do much, as the entities in the pipeline operate based
>> on buffers being queued on the input and output video devices. We may
>> still need to support this in the multi-context framework, depending on
>> the needs of drivers.
>>
>> Anthony, could you perhaps share some information about the pipeline
>> you're envisioning and the type of subdev that you think would cause
>> concerns?

I am currently working on a driver for the Mali-C720 ISP. See the
developer page linked below for some details:

https://developer.arm.com/Processors/Mali-C720AE

To summarize, it is capable of supporting up to 16 sensors, either through
streaming inputs or memory-to-memory modes, and uses a hardware context manager
to schedule each context for processing. There are four video inputs, each
supporting four virtual channels. On the processing side, there are two
parallel processing pipelines, one optimized for human vision (HV) and the
other for computer vision (CV). These feed into numerous output pipelines,
including four crop+scaler pipes, each of which can independently select
whether to use the HV or CV pipe as its input.

As such, our driver has a multi-layer topology to facilitate this
configurability. With some small changes to libcamera I have all of the output
pipelines implemented and the media graph correctly configured, but we would
like to update the driver to support multi-context.

My initial understanding was that each context could have its own topology
configured while using the same sub-devices. For example, context 0 may link
our crop+scaler pipes to human vision, whereas context 1 uses computer vision.
Similarly, our input sub-device uses internal routing to route from the desired
sensor to its context. My thought was that the input sub-device would be shared
across every context but could route the sensor data to the necessary contexts;
a rough sketch of what I mean follows below. With the current implementation,
we make heavy use of the streams API and have many links to configure based on
the use case, so in our case any multi-context integration would also need to
support this.
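
In routing terms, the sketch below is roughly what I had in mind for the
shared input sub-device; the pad and stream numbers are invented purely for
illustration:

/*
 * Purely illustrative: the shared input sub-device routes each sensor
 * (CSI-2 virtual channel) to the source pad feeding the context that
 * consumes it, so the routing differs per context while the sub-device
 * itself is shared.
 */
static const struct v4l2_subdev_route input_routes[] = {
        {
                /* Sensor on virtual channel 0 -> context 0's pad. */
                .sink_pad = 0,
                .sink_stream = 0,
                .source_pad = 1,        /* made-up pad index */
                .source_stream = 0,
                .flags = V4L2_SUBDEV_ROUTE_FL_ACTIVE,
        },
        {
                /* Sensor on virtual channel 1 -> context 1's pad. */
                .sink_pad = 0,
                .sink_stream = 1,
                .source_pad = 2,        /* made-up pad index */
                .source_stream = 0,
                .flags = V4L2_SUBDEV_ROUTE_FL_ACTIVE,
        },
};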

Anthony
