Message-ID: <96ebbfe7-a332-4ee9-b3c2-9efd68bc3132@arm.com>
Date: Fri, 3 Oct 2025 13:21:02 +0100
From: Anthony McGivern <anthony.mcgivern@....com>
To: Jacopo Mondi <jacopo.mondi@...asonboard.com>
Cc: Nicolas Dufresne <nicolas.dufresne@...labora.com>,
Laurent Pinchart <laurent.pinchart@...asonboard.com>,
"bcm-kernel-feedback-list@...adcom.com"
<bcm-kernel-feedback-list@...adcom.com>,
"florian.fainelli@...adcom.com" <florian.fainelli@...adcom.com>,
"hverkuil@...nel.org" <hverkuil@...nel.org>,
"kernel-list@...pberrypi.com" <kernel-list@...pberrypi.com>,
"Kieran Bingham (kieran.bingham@...asonboard.com)"
<kieran.bingham@...asonboard.com>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-media@...r.kernel.org" <linux-media@...r.kernel.org>,
"linux-rpi-kernel@...ts.infradead.org"
<linux-rpi-kernel@...ts.infradead.org>,
"m.szyprowski@...sung.com" <m.szyprowski@...sung.com>,
"mchehab@...nel.org" <mchehab@...nel.org>,
"sakari.ailus@...ux.intel.com" <sakari.ailus@...ux.intel.com>,
"tfiga@...omium.org" <tfiga@...omium.org>,
"tomi.valkeinen@...asonboard.com" <tomi.valkeinen@...asonboard.com>
Subject: Re: [PATCH v2 12/27] media: v4l2-subdev: Introduce v4l2 subdev
context
Hi Jacopo,
On 02/10/2025 14:28, Jacopo Mondi wrote:
> Hi Anthony
> thanks for the details
>
> On Thu, Oct 02, 2025 at 08:42:56AM +0100, Anthony McGivern wrote:
>>
>> Hi all,
>>
>> On 30/09/2025 13:58, Nicolas Dufresne wrote:
>>> Hi Laurent,
>>>
>>> Le mardi 30 septembre 2025 à 13:16 +0300, Laurent Pinchart a écrit :
>>>> On Tue, Sep 30, 2025 at 11:53:39AM +0200, Jacopo Mondi wrote:
>>>>> On Thu, Sep 25, 2025 at 09:26:56AM +0000, Anthony McGivern wrote:
>>>>>>> On Thu, Jul 24, 2025 at 16:10:19 +0200, Jacopo Mondi wrote:
>>>>>>> Introduce a new type in v4l2 subdev that represents a v4l2 subdevice
>>>>>>> context. It extends 'struct media_entity_context' and is intended to be
>>>>>>> extended by drivers that can store driver-specific information
>>>>>>> in their derived types.
>>>>>>>
>>>>>>> Signed-off-by: Jacopo Mondi <jacopo.mondi@...asonboard.com>
>>>>>>
>>>>>> I am interested in how the sub-device context will handle the
>>>>>> Streams API. Looking at the commits, the
>>>>>> v4l2_subdev_enable/disable_streams functions still appear to operate
>>>>>> on the main sub-device only. I take it we would have additional
>>>>>> context-aware functions here that can fetch the subdev state from
>>>>>> the sub-device context, though I imagine some fields will have to be
>>>>>> moved into the context, such as s_stream_enabled, or even
>>>>>> enabled_pads for non-stream-aware drivers?
>>>>>
>>>>> mmm good question, I admit I might have not considered that part yet.
>>>>>
>>>>> Streams API should go in as soon as Sakari's long-awaited series hits
>>>>> mainline, and I will certainly need to rebase soon, so I'll probably
>>>>> get back to this.
>>>>>
>>>>> Have you any idea about how this should be designed?
>>
>> Hmm, while I haven't thought through a full implementation, I did some
>> testing where I added a v4l2_subdev_context_enable_streams and its
>> respective disable_streams. These would take the v4l2_subdev_context so
>> that, when the subdev state was fetched, it would be retrieved from the
>> context. I think this would work with the streams API; however, for
>> drivers that don't support it, it would not, since fields such as
>> enabled_pads are located in the v4l2_subdev struct itself. Assuming these
>> fields are only used in the V4L2 core (I haven't checked this fully),
>> could they potentially be moved into the subdev state?
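>>
>> As a rough sketch of the helpers I was testing (the exact signatures
>> here are an assumption on my part; these are not part of the posted
>> series):
>>
>> /*
>>  * Hypothetical context-aware variants: resolve the subdev state from
>>  * the given context rather than from the subdev's active state.
>>  */
>> int v4l2_subdev_context_enable_streams(struct v4l2_subdev_context *ctx,
>>                                        u32 pad, u64 streams_mask);
>> int v4l2_subdev_context_disable_streams(struct v4l2_subdev_context *ctx,
>>                                         u32 pad, u64 streams_mask);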
>>
>> There were some other areas that I found when trying to implement this
>> in the driver we are working on; for example, media_pad_remote_pad_unique()
>> only uses the media_pad struct, meaning multi-context would not work here,
>> at least not in the way I expected (see the sketch below). Perhaps this is
>> where we have some differing thoughts on how it would be used. See some
>> details below about the driver we are working on.
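>>
>> As a purely hypothetical sketch (this helper does not exist in the
>> series; the signature is my assumption), I imagine a context-aware
>> lookup would need the context so it can walk per-context link state:
>>
>> struct media_pad *
>> media_pad_remote_pad_unique_ctx(const struct media_pad *pad,
>>                                 struct media_entity_context *ctx);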
>>
>>>>
>>>> Multi-context is designed for memory-to-memory pipelines, as inline
>>>> pipelines can't be time-multiplexed (at least not without very specific
>>>> hardware designs that I haven't encountered in SoCs so far). In a
>>>
>>> I probably don't understand what you mean here, since I know you are well aware
>>> of the ISP design on RK3588. It has two cores, which allow handling up to 2
>>> sensors inline, but once you need more streams, you should have a way to
>>> reconfigure the pipeline and use one or both cores in a m2m (multi-context)
>>> fashion to extend its capability (balancing the resolutions and rate as usual).
>>>
>>> Perhaps you mean this specific case is already covered by the streams API
>>> combined with other floating proposals? I think most of us are missing the
>>> big picture and just see organic proposals toward goals documented as
>>> unrelated, but that actually look related.
>>>
>>> Nicolas
>>>
>>>> memory-to-memory pipeline I expect the .enable/disable_streams()
>>>> operation to not do much, as the entities in the pipeline operate based
>>>> on buffers being queued on the input and output video devices. We may
>>>> still need to support this in the multi-context framework, depending on
>>>> the needs of drivers.
>>>>
>>>> Anthony, could you perhaps share some information about the pipeline
>>>> you're envisioning and the type of subdev that you think would cause
>>>> concerns?
>>
>> I am currently working on a driver for the Mali-C720 ISP. See the link
>> below for the developer page relating to this for some details:
>>
>> https://developer.arm.com/Processors/Mali-C720AE
>>
>> To summarize, it is capable of supporting up to 16 sensors, either through
>> streaming inputs or memory-to-memory modes, and uses a hardware context manager
>
> Could you help me better grasp this part? Can the device work in m2m and inline
> mode at the same time? IOW, can you assign some of the input ports to
> the streaming part and reserve other input ports for m2m? I'm
> interested in understanding which parts of the system are capable of
> reading from memory and which parts are instead fed from the CSI-2
> receiver pipeline.
Each context can run either in inline mode, as you'd call it, or in m2m mode.
It would be perfectly valid to have one context connected to a sensor while
another simply takes frames from buffers.
The hardware has numerous raw/out buffer descriptors that we can reserve for
our contexts. The driver handles reserving descriptors for each context, at
which point we must configure them with fields such as data format,
resolution, etc. We also assign their addresses, which may come from buffers
allocated internally by the driver for inline sensors, or from the vb2 queue
of our memory-input V4L2 output device.
A context must be assigned an input, which is the desired video input id in
inline mode, or a raw buffer descriptor id in m2m mode.
For inline mode, we configure our video input with the appropriate data format
and resolution, and assign it a raw buffer descriptor (the one we reserved for
our context). The hardware will then write frames that arrive on this input to
those buffers, at which point the hardware will know this context is ready to
be scheduled.
For m2m, the driver writes the VB2 buffer address to the raw buffer descriptor,
then triggers the context to be ready for scheduling.
The hardware is then responsible for actually scheduling these contexts.
If desired, a user can configure specific scheduling modes, though by default
we use a first come, first served approach.
Once scheduled, the hardware automatically reads from the context's assigned
raw buffer and injects it into the pipeline. Each output then writes to its
assigned output buffer descriptor, whose address is provided by the
corresponding capture device's vb2 queue.
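For illustration, here is a rough sketch of how a driver might model this
bookkeeping (all names and fields are invented for the example; this is not
the actual driver code):

/* Illustrative sketch only: invented names, not the real driver. */
enum c720_input_mode {
	C720_INPUT_INLINE,	/* fed by a video input (sensor) */
	C720_INPUT_M2M,		/* fed from the mem-input vb2 queue */
};

struct c720_buf_desc {
	unsigned int id;	/* hardware descriptor index */
	u32 format;		/* data format programmed into the descriptor */
	u32 width;
	u32 height;
	dma_addr_t addr;	/* internal buffer (inline) or vb2 buffer (m2m) */
};

struct c720_context {
	enum c720_input_mode mode;
	unsigned int input_id;		/* video input id, inline mode only */
	struct c720_buf_desc *raw;	/* reserved raw (input) descriptor */
	struct c720_buf_desc *outs[16];	/* one per enabled output pipe */
};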
>
>> to schedule each context to be processed. There are four video inputs, each
>> supporting four virtual channels. On the processing side, there are two parallel
>
> Similar in spirit to the previous question: "each input supports 4 virtual
> channels": do the 4 streams get demuxed to memory? Or do they get
> demuxed to an internal bus connected to the processing pipes?
>
Yes, the hardware treats every stream as a virtual input, so 16
virtual inputs in total. Each is configured with its own raw buffer descriptor,
and thus images are written to separate buffers.
>> processing pipelines, one optimized for human vision and the other for computer
>> vision. These feed into numerous output pipelines, including four crop+scaler
>> pipes, each of which can independently select whether to use the HV or CV pipe as
>> its input.
>>
>> As such, our driver has a multi-layer topology to facilitate this configurability.
>
> What do you mean by multi-layer? :)
Perhaps my terminology is wrong here xD But the general idea is this:
   Input
    pipe
    /  \
  HV    CV
    \  /
  Outputs
The input pipe (not to be confused with the video inputs) is the first stage
of the processing pipeline. From here, the image can flow to both HV and CV in
parallel. The output pipelines can then choose whether to use the image from the
human vision or computer vision pipe (mutually exclusive), and each output pipe can
choose independently (e.g. output 0 uses HV, while output 1 chooses CV). So I guess
I meant to say there are multiple layers in the media graph where links can be
configured.
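As a concrete illustration (entity names and pad numbers are taken from the
.dot graph below, while the /dev/media0 node is an assumption), selecting HV
for output 0 while output 1 uses CV would look something like:

media-ctl -d /dev/media0 -l '"mali-c720 hv pipe":1 -> "mali-c720 fr 0 pipe":0 [1]'
media-ctl -d /dev/media0 -l '"mali-c720 cv pipe":1 -> "mali-c720 fr 1 pipe":0 [1]'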
>
>> With some small changes to Libcamera I have all of the output pipelines implemented
>> and the media graph is correctly configured, but we would like to update the driver
>> to support multi-context.
>
> Care to share a .dot representation of the media graph?
>
Sure, I can attach what we have in its current state. Of course this doesn't show the
internal routes, one example being the input sub-device, which can route streams
from the 4 sink pads to the 16 possible source pads. An example here might be two
sensors sharing the same sink pad on different VCs, with one routed to context 0 and
the other to context 1.
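To put that example in media-ctl terms (the pad/stream numbers here are
hypothetical, and this assumes a media-ctl build with routing support; say
source pads 4 and 5 feed contexts 0 and 1):

media-ctl -d /dev/media0 -R '"mali-c720 input" [0/0 -> 4/0 [1], 0/1 -> 5/0 [1]]'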
We do also make use of streams within the isp sub-device to handle some hardware
muxes that control the flow of data through the input pipeline.
Perhaps this is not the best approach; I elected to use it over controls as different
routes actually affect the format of the image data. All the routes on the isp sub-device
are immutable, with downstream sub-devices selecting which of these mutually exclusive
routes they wish to use.
Just to point out, the isp sub-device represents the input pipeline. I called it this
to try to avoid confusion with the video inputs, and also because it acts as the main
point of control for the context (i.e. stopping/starting the HW). Data will always flow
through this pipeline, whereas HV and CV may not always be in use.
digraph board {
rankdir=TB
n00000001 [
label="{{} | mali-c720 tpg 0\n/dev/v4l-subdev0 | {<port0> 0}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000001:port0 -> n00000009:port0 [style=dashed]
n00000001:port0 -> n00000009:port1 [style=dashed]
n00000001:port0 -> n00000009:port2 [style=dashed]
n00000001:port0 -> n00000009:port3 [style=dashed]
n00000003 [
label="{{} | mali-c720 tpg 1\n/dev/v4l-subdev1 | {<port0> 0}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000003:port0 -> n00000009:port0 [style=dashed]
n00000003:port0 -> n00000009:port1 [style=dashed]
n00000003:port0 -> n00000009:port2 [style=dashed]
n00000003:port0 -> n00000009:port3 [style=dashed]
n00000005 [
label="mali-c720 mem-input\n/dev/video1",
shape=box,
style=filled,
fillcolor=yellow
]
n00000005 -> n0000001e:port0 [style=dashed]
n00000009 [
label="{{<port0> 0 | <port1> 1 | <port2> 2 | <port3> 3} |
mali-c720 input\n/dev/v4l-subdev2 |
{<port4> 4 | <port5> 5 | <port6> 6 | <port7> 7 |
<port8> 8 | <port9> 9 | <port10> 10 | <port11> 11 |
<port12> 12 | <port13> 13 | <port14> 14 | <port15> 15 |
<port16> 16 | <port17> 17 | <port18> 18 | <port19> 19}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000009:port4 -> n0000001e:port0 [style=dashed]
n00000009:port5 -> n0000001e:port0 [style=dashed]
n00000009:port6 -> n0000001e:port0 [style=dashed]
n00000009:port7 -> n0000001e:port0 [style=dashed]
n00000009:port8 -> n0000001e:port0 [style=dashed]
n00000009:port9 -> n0000001e:port0 [style=dashed]
n00000009:port10 -> n0000001e:port0 [style=dashed]
n00000009:port11 -> n0000001e:port0 [style=dashed]
n00000009:port12 -> n0000001e:port0 [style=dashed]
n00000009:port13 -> n0000001e:port0 [style=dashed]
n00000009:port14 -> n0000001e:port0 [style=dashed]
n00000009:port15 -> n0000001e:port0 [style=dashed]
n00000009:port16 -> n0000001e:port0 [style=dashed]
n00000009:port17 -> n0000001e:port0 [style=dashed]
n00000009:port18 -> n0000001e:port0 [style=dashed]
n00000009:port19 -> n0000001e:port0 [style=dashed]
n0000001e [
label="{{<port0> 0 | <port1> 1} |
mali-c720 isp\n/dev/v4l-subdev3 |
{<port2> 2 | <port3> 3 | <port4> 4 |
<port5> 5 | <port6> 6}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n0000001e:port3 -> n00000026:port0
n0000001e:port4 -> n0000002a:port0
n0000001e:port5 -> n0000003e:port0
n0000001e:port6 -> n0000003e:port0 [style=dashed]
n0000001e:port6 -> n00000047:port0 [style=dashed]
n0000001e:port2 -> n0000007e
n00000026 [
label="{{<port0> 0} |
mali-c720 hv pipe\n/dev/v4l-subdev4 |
{<port1> 1 | <port2> 2}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000026:port1 -> n0000002e:port0
n00000026:port1 -> n00000032:port0
n00000026:port1 -> n00000036:port0
n00000026:port1 -> n0000003a:port0
n00000026:port2 -> n0000003e:port0 [style=dashed]
n00000026:port1 -> n00000047:port0
n0000002a [
label="{{<port0> 0} |
mali-c720 cv pipe\n/dev/v4l-subdev5 |
{<port1> 1 | <port2> 2}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n0000002a:port1 -> n0000002e:port0 [style=dashed]
n0000002a:port1 -> n00000032:port0 [style=dashed]
n0000002a:port1 -> n00000036:port0 [style=dashed]
n0000002a:port1 -> n0000003a:port0 [style=dashed]
n0000002a:port2 -> n0000003e:port0 [style=dashed]
n0000002a:port1 -> n00000047:port0 [style=dashed]
n0000002e [
label="{{<port0> 0} |
mali-c720 fr 0 pipe\n/dev/v4l-subdev6 |
{<port1> 1 | <port2> 2}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n0000002e:port1 -> n0000004a
n0000002e:port2 -> n0000004e
n0000002e:port1 -> n00000042:port0 [style=dashed]
n0000002e:port2 -> n00000042:port0 [style=dashed]
n00000032 [
label="{{<port0> 0} |
mali-c720 fr 1 pipe\n/dev/v4l-subdev7 |
{<port1> 1 | <port2> 2}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000032:port1 -> n00000052
n00000032:port2 -> n00000056
n00000032:port1 -> n00000042:port0 [style=dashed]
n00000032:port2 -> n00000042:port0 [style=dashed]
n00000036 [
label="{{<port0> 0} |
mali-c720 fr 2 pipe\n/dev/v4l-subdev8 |
{<port1> 1 | <port2> 2}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000036:port1 -> n0000005a
n00000036:port2 -> n0000005e
n00000036:port1 -> n00000042:port0 [style=dashed]
n00000036:port2 -> n00000042:port0 [style=dashed]
n0000003a [
label="{{<port0> 0} |
mali-c720 fr 3 pipe\n/dev/v4l-subdev9 |
{<port1> 1 | <port2> 2}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n0000003a:port1 -> n00000062
n0000003a:port2 -> n00000066
n0000003a:port1 -> n00000042:port0 [style=dashed]
n0000003a:port2 -> n00000042:port0 [style=dashed]
n0000003e [
label="{{<port0> 0} |
mali-c720 raw pipe\n/dev/v4l-subdev10 |
{<port1> 1 | <port2> 2}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n0000003e:port1 -> n00000076
n0000003e:port2 -> n00000042:port0 [style=dashed]
n00000042 [
label="{{<port0> 0} |
mali-c720 foveated pipe\n/dev/v4l-subdev11 |
{<port1> 1 | <port2> 2 | <port3> 3}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000042:port1 -> n0000006a
n00000042:port2 -> n0000006e
n00000042:port3 -> n00000072
n00000047 [
label="{{<port0> 0} |
mali-c720 pyramid pipe\n/dev/v4l-subdev12 |
{<port1> 1}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000047:port1 -> n0000007a
n0000004a [
label="mali-c720 fr0-rgb\n/dev/video2",
shape=box,
style=filled,
fillcolor=yellow
]
n0000004e [
label="mali-c720 fr0-yuv\n/dev/video3",
shape=box,
style=filled,
fillcolor=yellow
]
n00000052 [
label="mali-c720 fr1-rgb\n/dev/video4",
shape=box,
style=filled,
fillcolor=yellow
]
n00000056 [
label="mali-c720 fr1-yuv\n/dev/video5",
shape=box,
style=filled,
fillcolor=yellow
]
n0000005a [
label="mali-c720 fr2-rgb\n/dev/video6",
shape=box,
style=filled,
fillcolor=yellow
]
n0000005e [
label="mali-c720 fr2-yuv\n/dev/video7",
shape=box,
style=filled,
fillcolor=yellow
]
n00000062 [
label="mali-c720 fr3-rgb\n/dev/video8",
shape=box,
style=filled,
fillcolor=yellow
]
n00000066 [
label="mali-c720 fr3-yuv\n/dev/video9",
shape=box,
style=filled,
fillcolor=yellow
]
n0000006a [
label="mali-c720 fov-0\n/dev/video10",
shape=box,
style=filled,
fillcolor=yellow
]
n0000006e [
label="mali-c720 fov-1\n/dev/video11",
shape=box,
style=filled,
fillcolor=yellow
]
n00000072 [
label="mali-c720 fov-2\n/dev/video12",
shape=box,
style=filled,
fillcolor=yellow
]
n00000076 [
label="mali-c720 raw\n/dev/video13",
shape=box,
style=filled,
fillcolor=yellow
]
n0000007a [
label="mali-c720 pyramid\n/dev/video14",
shape=box,
style=filled,
fillcolor=yellow
]
n0000007e [
label="mali-c720 3a stats\n/dev/video15",
shape=box,
style=filled,
fillcolor=yellow
]
n00000082 [
label="mali-c720 3a params\n/dev/video16",
shape=box,
style=filled,
fillcolor=yellow
]
n00000082 -> n0000001e:port1
n0000010a [
label="{{<port0> 0} |
lte-csi2-rx\n/dev/v4l-subdev13 |
{<port1> 1}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n0000010a:port1 -> n00000009:port0
n0000010f [
label="{{} | ar0231 0-0010\n/dev/v4l-subdev14 | {<port0> 0}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n0000010f:port0 -> n0000010a:port0 [style=bold]
}
>>
>> My understanding initially was that each context could have its own topology
>> configured while using the same sub-devices. For example, context 0 may link
>> our crop+scaler pipes to human vision, whereas context 1 uses computer vision.
>> Similarly, our input sub-device uses internal routing to route from the
>> desired sensor to its context. My thinking was that the input sub-device here
>> would be shared across every context but could route the sensor data to the
>> necessary contexts. With the current implementation, we make heavy use of the
>> streams API and have many links to configure based on the use case, so in our
>> case any multi-context integration would also need to support this.
>>
>
> Media link state and routing I think make sense in the perspective of
> contexts. I still feel like for ISP pipelines we could do with just
> media links, but routing can be used as well (and in fact we already
> do in the C55 iirc). At this time there is no support in this series
> for this simply because it's not a feature I need.
>
> As Laurent said, the streams API is mostly designed to represent data
> streams multiplexed on the same physical bus, with CSI-2 being the
> main use case for now, and I admit I'm still not sure if and how it
> has to be considered when operating with contexts.
>
Technically, streams could be limited to the video inputs in our driver,
handling the routing of sensor inputs to the appropriate contexts. However,
I haven't thought of a better way to deal with the internal routing within
the pipeline for the hardware muxes, especially since they affect the data
format and, in some cases, the resolution.
> My general rule of thumb to decide if a point in the pipeline should
> be context aware or not is: "can its configuration change on a
> per-frame basis?". If yes, then it means it is designed to be
> time-multiplexed between different contexts. If not (maybe I'm
> oversimplifying here), then there is no need to alternate its usage on
> a per-context basis, and a properly designed link/routing setup should
> do.
>
In our case we can completely change the configuration of the ISP on
every frame, including internal muxes, which outputs are in use, etc.
But of course it makes sense that not every ISP may support this
functionality.
Thanks,
Anthony
> Yesterday I discussed with Michael whether contexts could also be used
> for partitioning a graph (making sure two non-overlapping partitions
> of the pipeline can be used at the same time by two different
> applications). I guess you could, but that's not the primary target,
> as if the pipeline is properly designed you should be able to
> partition it using media links and routing.
>
> Happy to discuss your use case in more detail though to make sure
> that, even if not all the required features are there in this first
> version, we're not designing something that makes it impossible to
> support them in future.
>
>> Anthony