Date: Tue, 30 Apr 2024 10:39:00 +0530
From: Beleswar Prasad Padhi <b-padhi@...com>
To: Mathieu Poirier <mathieu.poirier@...aro.org>
CC: <andersson@...nel.org>, <s-anna@...com>,
        <linux-remoteproc@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <u-kumar1@...com>, <nm@...com>, <devarsht@...com>, <hnagalla@...com>
Subject: Re: [EXTERNAL] Re: [PATCH v2 1/2] remoteproc: k3-r5: Wait for core0
 power-up before powering up core1

Hello,

On 26/04/24 22:39, Mathieu Poirier wrote:
> Good day,
>
> On Wed, Apr 24, 2024 at 06:35:03PM +0530, Beleswar Padhi wrote:
> > From: Apurva Nandan <a-nandan@...com>
> > 
> > PSC controller has a limitation that it can only power-up the second core
> > when the first core is in ON state. Power-state for core0 should be equal
> > to or higher than core1, else the kernel is seen hanging during rproc
> > loading.
> > 
> > Make the powering up of cores sequential by waiting for the current core
> > to power up before proceeding to the next core, with a timeout of 2 sec.
> > Add a wait queue event in k3_r5_cluster_rproc_init call, that will wait
> > for the current core to be released from reset before proceeding with the
> > next core.
> > 
> > Fixes: 6dedbd1d5443 ("remoteproc: k3-r5: Add a remoteproc driver for R5F subsystem")
> > 
> > Signed-off-by: Apurva Nandan <a-nandan@...com>
>
> You need to add your own SoB as well.
>
> > ---
> >  drivers/remoteproc/ti_k3_r5_remoteproc.c | 28 ++++++++++++++++++++++++
> >  1 file changed, 28 insertions(+)
> > 
> > diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c
> > index ad3415a3851b..5a9bd5d4a2ea 100644
> > --- a/drivers/remoteproc/ti_k3_r5_remoteproc.c
> > +++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c
> > @@ -103,12 +103,14 @@ struct k3_r5_soc_data {
> >   * @dev: cached device pointer
> >   * @mode: Mode to configure the Cluster - Split or LockStep
> >   * @cores: list of R5 cores within the cluster
> > + * @core_transition: wait queue to sync core state changes
> >   * @soc_data: SoC-specific feature data for a R5FSS
> >   */
> >  struct k3_r5_cluster {
> >  	struct device *dev;
> >  	enum cluster_mode mode;
> >  	struct list_head cores;
> > +	wait_queue_head_t core_transition;
> >  	const struct k3_r5_soc_data *soc_data;
> >  };
> >  
> > @@ -128,6 +130,7 @@ struct k3_r5_cluster {
> >   * @atcm_enable: flag to control ATCM enablement
> >   * @btcm_enable: flag to control BTCM enablement
> >   * @loczrama: flag to dictate which TCM is at device address 0x0
> > + * @released_from_reset: flag to signal when core is out of reset
> >   */
> >  struct k3_r5_core {
> >  	struct list_head elem;
> > @@ -144,6 +147,7 @@ struct k3_r5_core {
> >  	u32 atcm_enable;
> >  	u32 btcm_enable;
> >  	u32 loczrama;
> > +	bool released_from_reset;
> >  };
> >  
> >  /**
> > @@ -460,6 +464,8 @@ static int k3_r5_rproc_prepare(struct rproc *rproc)
> >  			ret);
> >  		return ret;
> >  	}
> > +	core->released_from_reset = true;
> > +	wake_up_interruptible(&cluster->core_transition);
> >  
> >  	/*
> >  	 * Newer IP revisions like on J7200 SoCs support h/w auto-initialization
> > @@ -1140,6 +1146,7 @@ static int k3_r5_rproc_configure_mode(struct k3_r5_rproc *kproc)
> >  		return ret;
> >  	}
> >  
> > +	core->released_from_reset = c_state;
>
> I understand why this is needed, but this line could be very cryptic for people
> trying to understand this driver.  Please add a comment describing what is
> happening here.
Thanks for the review. I will send v3 addressing these comments shortly!
>
> >  	ret = ti_sci_proc_get_status(core->tsp, &boot_vec, &cfg, &ctrl,
> >  				     &stat);
> >  	if (ret < 0) {
> > @@ -1280,6 +1287,26 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev)
> >  		    cluster->mode == CLUSTER_MODE_SINGLECPU ||
> >  		    cluster->mode == CLUSTER_MODE_SINGLECORE)
> >  			break;
> > +
> > +		/*
> > +		 * R5 cores must be powered on sequentially: core0
> > +		 * should be in a higher power state than core1 in a
> > +		 * cluster. So, wait for the current core to power up
> > +		 * before proceeding to the next core, with a 2 sec timeout.
> > +		 *
> > +		 * This waiting mechanism is necessary because
> > +		 * rproc_auto_boot_callback() for core1 can be called before
> > +		 * core0 due to thread execution order.
> > +		 */
> > +		ret = wait_event_interruptible_timeout(cluster->core_transition,
> > +						       core->released_from_reset,
> > +						       msecs_to_jiffies(2000));
> > +		if (ret <= 0) {
> > +			dev_err(dev,
> > +				"Timed out waiting for %s core to power up!\n",
> > +				rproc->name);
> > +			return ret;
> > +		}
> >  	}
> >  
> >  	return 0;
> > @@ -1709,6 +1736,7 @@ static int k3_r5_probe(struct platform_device *pdev)
> >  	cluster->dev = dev;
> >  	cluster->soc_data = data;
> >  	INIT_LIST_HEAD(&cluster->cores);
> > +	init_waitqueue_head(&cluster->core_transition);
> >  
> >  	ret = of_property_read_u32(np, "ti,cluster-mode", &cluster->mode);
> >  	if (ret < 0 && ret != -EINVAL) {
> > -- 
> > 2.34.1
> > 
>
