Message-ID: <ybsoposvudskkmzua5u33cq2jcstm7pzoklzutwazn2bvqobvo@pwsdlwtohcw3>
Date: Thu, 2 Oct 2025 15:28:27 +0200
From: Jacopo Mondi <jacopo.mondi@...asonboard.com>
To: Anthony McGivern <anthony.mcgivern@....com>
Cc: Nicolas Dufresne <nicolas.dufresne@...labora.com>,
Laurent Pinchart <laurent.pinchart@...asonboard.com>, Jacopo Mondi <jacopo.mondi@...asonboard.com>,
"bcm-kernel-feedback-list@...adcom.com" <bcm-kernel-feedback-list@...adcom.com>, "florian.fainelli@...adcom.com" <florian.fainelli@...adcom.com>,
"hverkuil@...nel.org" <hverkuil@...nel.org>, "kernel-list@...pberrypi.com" <kernel-list@...pberrypi.com>,
"Kieran Bingham (kieran.bingham@...asonboard.com)" <kieran.bingham@...asonboard.com>,
"linux-arm-kernel@...ts.infradead.org" <linux-arm-kernel@...ts.infradead.org>, "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-media@...r.kernel.org" <linux-media@...r.kernel.org>,
"linux-rpi-kernel@...ts.infradead.org" <linux-rpi-kernel@...ts.infradead.org>, "m.szyprowski@...sung.com" <m.szyprowski@...sung.com>,
"mchehab@...nel.org" <mchehab@...nel.org>, "sakari.ailus@...ux.intel.com" <sakari.ailus@...ux.intel.com>,
"tfiga@...omium.org" <tfiga@...omium.org>,
"tomi.valkeinen@...asonboard.com" <tomi.valkeinen@...asonboard.com>
Subject: Re: [PATCH v2 12/27] media: v4l2-subdev: Introduce v4l2 subdev
context
Hi Anthony,
thanks for the details.
On Thu, Oct 02, 2025 at 08:42:56AM +0100, Anthony McGivern wrote:
>
> Hi all,
>
> On 30/09/2025 13:58, Nicolas Dufresne wrote:
> > Hi Laurent,
> >
> > On Tuesday, 30 September 2025 at 13:16 +0300, Laurent Pinchart wrote:
> >> On Tue, Sep 30, 2025 at 11:53:39AM +0200, Jacopo Mondi wrote:
> >>> On Thu, Sep 25, 2025 at 09:26:56AM +0000, Anthony McGivern wrote:
> >>>>> On Thu, Jul 24, 2025 at 16:10:19 +0200, Jacopo Mondi wrote:
> >>>>> Introduce a new type in v4l2 subdev that represents a v4l2 subdevice
> >>>>> context. It extends 'struct media_entity_context' and is intended to be
> >>>>> extended by drivers, which can store driver-specific information
> >>>>> in their derived types.
> >>>>>
> >>>>> Signed-off-by: Jacopo Mondi <jacopo.mondi@...asonboard.com>
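To recap the shape of the new type: it presumably follows the usual
kernel embedding pattern. A minimal sketch, where the field names are
illustrative assumptions and not necessarily the series' actual
definition:

	struct v4l2_subdev_context {
		/* Base class: first member, so container_of() works. */
		struct media_entity_context entity_ctx;
		/* The subdevice this context belongs to. */
		struct v4l2_subdev *sd;
		/* Per-context pad state (formats, routing, ...). */
		struct v4l2_subdev_state *state;
	};

	/* A driver then derives from it for its own per-context data: */
	struct foo_isp_context {
		struct v4l2_subdev_context base;
		u32 hw_slot;	/* hypothetical driver-specific field */
	};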
> >>>>
> >>>> I am curious how the sub-device context will handle the
> >>>> Streams API. Looking at the commits, the
> >>>> v4l2_subdev_enable/disable_streams functions still appear to operate
> >>>> on the main sub-device only. I take it we would have additional
> >>>> context-aware functions here that can fetch the subdev state from
> >>>> the sub-device context, though I imagine some fields will have to be
> >>>> moved into the context, such as s_stream_enabled, or even
> >>>> enabled_pads for non-stream-aware drivers?
> >>>
> >>> Mmm, good question. I admit I might not have considered that part yet.
> >>>
> >>> The streams API should go in as soon as Sakari's long-awaited series hits
> >>> mainline, and I will certainly need to rebase soon, so I'll probably
> >>> get back to this.
> >>>
> >>> Have you any idea about how this should be designed?
>
> Hmm, while I haven't thought through a full implementation, I did some testing
> where I added a v4l2_subdev_context_enable_streams and its respective
> disable_streams. These take the v4l2_subdev_context, so that when the
> subdev state is fetched it is retrieved from the context.
> I think this would work with the streams API; however, for drivers that don't
> support it this will not work, since fields such as enabled_pads are located
> in the v4l2_subdev struct itself. Assuming these fields are only used in the
> V4L2 core (I haven't checked this fully), could they potentially be moved into
> the subdev state?
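
To make that concrete, here is a minimal sketch of the kind of wrapper
being described; the name, signature and field accesses are assumptions
based on the description above, not code from the series:

	int v4l2_subdev_context_enable_streams(struct v4l2_subdev_context *ctx,
					       u32 pad, u64 streams_mask)
	{
		/*
		 * Fetch the state from the context rather than from the
		 * subdevice, then invoke the existing pad op against it.
		 */
		struct v4l2_subdev_state *state = ctx->state;

		/*
		 * Note: per-subdev bookkeeping such as enabled_pads would
		 * have to move into the context (or the state) for this
		 * to also cover non-stream-aware drivers.
		 */
		return v4l2_subdev_call(ctx->sd, pad, enable_streams,
					state, pad, streams_mask);
	}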
>
> There were some other areas I found when trying to implement this
> in the driver we are working on; for example, media_pad_remote_pad_unique()
> only uses the media_pad struct, meaning multi-context would not work here,
> at least in the way I expected. Perhaps this is where we have some differing
> thoughts on how it would be used. See some details below about the driver we
> are working on.
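
For reference, the existing helper only has the pad itself to work
with:

	struct media_pad *media_pad_remote_pad_unique(const struct media_pad *pad);

so a context-aware lookup would presumably need the context's own link
state; something like the following hypothetical prototype, just to
illustrate the gap:

	struct media_pad *
	media_pad_context_remote_pad_unique(const struct media_entity_context *ctx,
					    const struct media_pad *pad);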
>
> >>
> >> Multi-context is designed for memory-to-memory pipelines, as inline
> >> pipelines can't be time-multiplexed (at least not without very specific
> >> hardware designs that I haven't encountered in SoCs so far). In a
> >
> > I probably don't understand what you mean here, since I know you are well aware
> > of the ISP design on the RK3588. It has two cores, which allow handling up to 2
> > sensors inline, but once you need more streams, you should have a way to
> > reconfigure the pipeline and use one or both cores in an m2m (multi-context)
> > fashion to extend its capability (balancing the resolutions and rates as usual).
> >
> > Perhaps you mean this specific case is already covered by the streams API
> > combined with other floating proposals? I think most of us are missing the big
> > picture and just see organic proposals toward goals documented as unrelated,
> > but that actually look related.
> >
> > Nicolas
> >
> >> memory-to-memory pipeline I expect the .enable/disable_streams()
> >> operation to not do much, as the entities in the pipeline operate based
> >> on buffers being queued on the input and output video devices. We may
> >> still need to support this in the multi-context framework, depending on
> >> the needs of drivers.
> >>
> >> Anthony, could you perhaps share some information about the pipeline
> >> you're envisioning and the type of subdev that you think would cause
> >> concerns?
>
> I am currently working on a driver for the Mali-C720 ISP. See the link
> below to its developer page for some details:
>
> https://developer.arm.com/Processors/Mali-C720AE
>
> To summarize, it is capable of supporting up to 16 sensors, either through
> streaming inputs or memory-to-memory modes, and uses a hardware context manager
Could you help me better grasp this part? Can the device work in m2m and inline
mode at the same time? IOW, can you assign some of the input ports to
the streaming part and reserve other input ports for m2m? I'm
interested in understanding which parts of the system are capable of
reading from memory and which parts are instead fed from the CSI-2
receiver pipeline.
> to schedule each context to be processed. There are four video inputs, each
> supporting four virtual channels. On the processing side, there are two parallel
Similar in spirit to the previous question: "each input supports 4 virtual
channels": do the 4 streams get demuxed to memory? Or do they get
demuxed to an internal bus connected to the processing pipes?
> processing pipelines, one optimized for human vision and the other for computer
> vision. These feed into numerous output pipelines, including four crop+scaler
> pipes that can each independently select whether to use the HV or the CV pipe as
> their input.
>
> As such, our driver has a multi-layer topology to facilitate this configurability.
What do you mean by multi-layer? :)
> With some small changes to Libcamera I have all of the output pipelines implemented
> and the media graph is correctly configured, but we would like to update the driver
> to support multi-context.
Care to share a .dot representation of the media graph?
>
> My understanding initially was that each context could have its own topology configured
> while using the same sub-devices. For example, context 0 may link our crop+scaler
> pipes to human vision, whereas context 1 uses computer vision. Similarly, our input
> sub-device uses internal routing to route from the desired sensor to its context.
> My thinking was that the input sub-device here would be shared across every
> context but could route the sensor data to the necessary contexts. With the current
> implementation, we make heavy use of the streams API and have many links to configure
> based on the use case, so in our case any multi-context integration would also need
> to support this.
>
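To make the routing part of this concrete: the per-context topologies
you describe could presumably be expressed with the existing route
structures applied to a per-context state. A sketch, with pad numbers
invented for illustration:

	/* Context 0: route the sensor input through the human vision pipe. */
	static const struct v4l2_subdev_route ctx0_routes[] = {
		{
			.sink_pad = 0,		/* sensor input (made up) */
			.sink_stream = 0,
			.source_pad = 4,	/* HV pipe source (made up) */
			.source_stream = 0,
			.flags = V4L2_SUBDEV_ROUTE_FL_ACTIVE,
		},
	};

	/* Context 1: the same input, but through the computer vision pipe. */
	static const struct v4l2_subdev_route ctx1_routes[] = {
		{
			.sink_pad = 0,
			.sink_stream = 0,
			.source_pad = 5,	/* CV pipe source (made up) */
			.source_stream = 0,
			.flags = V4L2_SUBDEV_ROUTE_FL_ACTIVE,
		},
	};
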
Media link state and routing I think make sense from the perspective of
contexts. I still feel like for ISP pipelines we could do with just
media links, but routing can be used as well (and in fact we already
do so in the C55, iirc). At this time there is no support in this series
for this, simply because it's not a feature I need.
As Laurent said, the streams API is mostly designed to represent data
streams multiplexed on the same physical bus, with CSI-2 being the
main use case for now, and I admit I'm still not sure if and how it
has to be considered when operating with contexts.
My general rule of thumb to decide if a point in the pipeline should
be context-aware or not is: "can its configuration change on a
per-frame basis?". If yes, then it means it is designed to be
time-multiplexed between different contexts. If not (maybe I'm
oversimplifying here), then there is no need to alternate its usage on
a per-context basis, and a properly designed link/routing setup should
do. For example, an ISP core that loads a new set of parameters for
every frame is a natural candidate for contexts, while a CSI-2
receiver tied to a single sensor is not.
Yesterday I discussed with Michael whether contexts could also be used
for partitioning a graph (making sure two non-overlapping partitions
of the pipeline can be used at the same time by two different
applications). I guess you could, but that's not the primary target:
if the pipeline is properly designed you should be able to partition
it using media links and routing.
Happy to discuss your use case in more detail, though, to make sure
that, even if not all the required features are there in this first
version, we're not designing something that makes it impossible to
support them in the future.
> Anthony