Message-ID: <zmuytuvsjpe4rx7oak762onncax7ko5ljfzber3dsirrpbpvne@lr7t2ultlsdk>
Date: Mon, 22 Jul 2024 10:57:50 +0200
From: Jacopo Mondi <jacopo.mondi@...asonboard.com>
To: Changhuang Liang <changhuang.liang@...rfivetech.com>
Cc: Jacopo Mondi <jacopo.mondi@...asonboard.com>, 
	Mauro Carvalho Chehab <mchehab@...nel.org>, Maxime Ripard <mripard@...nel.org>, 
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>, Hans Verkuil <hverkuil-cisco@...all.nl>, 
	Laurent Pinchart <laurent.pinchart@...asonboard.com>, Tomi Valkeinen <tomi.valkeinen+renesas@...asonboard.com>, 
	Jack Zhu <jack.zhu@...rfivetech.com>, Keith Zhao <keith.zhao@...rfivetech.com>, 
	Jayshri Pawar <jpawar@...ence.com>, Jai Luthra <j-luthra@...com>, 
	"linux-media@...r.kernel.org" <linux-media@...r.kernel.org>, "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, 
	"linux-staging@...ts.linux.dev" <linux-staging@...ts.linux.dev>
Subject: Re: [PATCH v2 5/5] staging: media: starfive: Add system PM support

Hi Changhuang

On Fri, Jul 19, 2024 at 02:08:20AM GMT, Changhuang Liang wrote:
> Hi Jacopo,
>
> Thanks for your comments.
>
> >
> > Hi Changhuang
> >
> > On Wed, Jul 17, 2024 at 08:28:34PM GMT, Changhuang Liang wrote:
> > > This patch implements system suspend and system resume operation for
> > > StarFive Camera Subsystem. It supports hibernation during streaming
> > > and restarts streaming at system resume time.
> > >
> > > Signed-off-by: Changhuang Liang <changhuang.liang@...rfivetech.com>
> > > ---
> > >  .../staging/media/starfive/camss/stf-camss.c  | 49 +++++++++++++++++++
> > >  1 file changed, 49 insertions(+)
> > >
> > > diff --git a/drivers/staging/media/starfive/camss/stf-camss.c b/drivers/staging/media/starfive/camss/stf-camss.c
> > > index fecd3e67c7a1..8dcd35aef69d 100644
> > > --- a/drivers/staging/media/starfive/camss/stf-camss.c
> > > +++ b/drivers/staging/media/starfive/camss/stf-camss.c
> > > @@ -416,10 +416,59 @@ static int __maybe_unused stfcamss_runtime_resume(struct device *dev)
> > >  	return 0;
> > >  }
> > >
> > > +static int __maybe_unused stfcamss_suspend(struct device *dev)
> > > +{
> > > +	struct stfcamss *stfcamss = dev_get_drvdata(dev);
> > > +	struct stfcamss_video *video;
> >
> > Can be declared inside the for loop
> >
> > > +	unsigned int i;
> > > +
> > > +	for (i = 0; i < STF_CAPTURE_NUM; ++i) {
> >
> > Likewise, if you like it, you can
> >
> >         for (unsigned int i...
> >
> > > +		video = &stfcamss->captures[i].video;
> > > +		if (video->vb2_q.streaming) {
> > > +			video->ops->stop_streaming(video);
> > > +			video->ops->flush_buffers(video, VB2_BUF_STATE_ERROR);
> > > +		}
> > > +	}
> > > +
> > > +	return pm_runtime_force_suspend(dev);
> > > +}
> > > +
> > > +static int __maybe_unused stfcamss_resume(struct device *dev)
> > > +{
> > > +	struct stfcamss *stfcamss = dev_get_drvdata(dev);
> > > +	struct stf_isp_dev *isp_dev = &stfcamss->isp_dev;
> > > +	struct v4l2_subdev_state *sd_state;
> > > +	struct stfcamss_video *video;
> > > +	unsigned int i;
> >
> > same here
> >
> > > +	int ret;
> > > +
> > > +	ret = pm_runtime_force_resume(dev);
> > > +	if (ret < 0) {
> > > +		dev_err(dev, "Failed to resume\n");
> > > +		return ret;
> > > +	}
> > > +
> > > +	sd_state = v4l2_subdev_lock_and_get_active_state(&isp_dev->subdev);
> > > +
> > > +	if (isp_dev->streaming)
> > > +		stf_isp_stream_on(isp_dev, sd_state);
> >
> > I was wondering if you shouldn't propagate start_streaming along the whole
> > pipeline, but I presume the connected subdevs have to handle resuming
> > streaming after a system resume themselves?
> >
>
> Currently our Camera Subsystem contains an ISP subdev, a capture_raw video
> device, and a capture_yuv video device, so only one set of system PM hooks
> is shared by all of them.
>

Sorry, maybe I was not clear (and I was probably confused as well).

You are right, this is the main entry point for the system sleep PM hooks.

> >
> > > +
> > > +	v4l2_subdev_unlock_state(sd_state);
> > > +
> > > +	for (i = 0; i < STF_CAPTURE_NUM; ++i) {
> > > +		video = &stfcamss->captures[i].video;
> > > +		if (video->vb2_q.streaming)
> > > +			video->ops->start_streaming(video);

And here you propagate the start_streaming (and stop_streaming on
suspend) call to all your video devices.

I see your video devices propagating the s_stream call to their
'source_subdev'. And your ISP subdev doing the same in
'isp_set_stream()'.

According to the media graph in
Documentation/admin-guide/media/starfive_camss_graph.dot

your 'capture_yuv' video device is connected to your ISP, and your
'capture_raw' video device is connected to your 'CSI-RX' subdev.

If my understanding is correct, your CSI-RX subdev will receive 2
calls to s_stream() (one from the ISP subdev and one from the
'capture_raw' video device). Am I mistaken maybe ?

Also, if the CSI-RX subdev is already part of a capture pipeline, as
Tomi pointed out in his review of patch [2/5] it doesn't need to
implement handlers for system suspend/resume.


> >
> > You can use vb2_is_streaming() maybe.

I was suggesting to use vb2_is_streaming() instead of open-coding

		if (video->vb2_q.streaming)

> > If the queue is streaming, do you need to keep a 'streaming' flag for the isp ?
> > Probably yes, as the ISP subdev is used by several video nodes ?
> >
>
> I set the "streaming" flag in PATCH 4, so it is not affected even if
> several video nodes use it.

Yeah, I was wondering if you could have avoided manually tracking the
streaming state in the ISP (re-reading the patches, where do you
actually use the 'streaming' flag in the ISP subdev?) by tracking the
vb2_queue state instead.

>
> Regards,
> Changhuang
