Message-ID: <20210315173724.GB1342614@xps15>
Date: Mon, 15 Mar 2021 11:37:24 -0600
From: Mathieu Poirier <mathieu.poirier@...aro.org>
To: Ben Levinsky <BLEVINSK@...inx.com>
Cc: "devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
"linux-remoteproc@...r.kernel.org" <linux-remoteproc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
Michal Simek <michals@...inx.com>,
"Ed T. Mooring" <emooring@...inx.com>
Subject: Re: [PATCH v26 5/5] remoteproc: Add initial zynqmp R5 remoteproc
driver
On Thu, Mar 11, 2021 at 11:49:13PM +0000, Ben Levinsky wrote:
> Hi Mathieu
>
> -----Original Message-----
> From: Mathieu Poirier <mathieu.poirier@...aro.org>
> Date: Tuesday, March 9, 2021 at 8:53 AM
> To: Ben Levinsky <BLEVINSK@...inx.com>
> Cc: "devicetree@...r.kernel.org" <devicetree@...r.kernel.org>, "linux-remoteproc@...r.kernel.org" <linux-remoteproc@...r.kernel.org>, "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "linux-arm-kernel@...ts.infradead.org" <linux-arm-kernel@...ts.infradead.org>, Michal Simek <michals@...inx.com>
> Subject: Re: [PATCH v26 5/5] remoteproc: Add initial zynqmp R5 remoteproc driver
>
> [...]
>
> > +
> > +/**
> > + * zynqmp_r5_probe - Probes ZynqMP R5 processor device node
> > + * this is called for each individual R5 core to
> > + * set up mailbox, Xilinx platform manager unique ID,
> > + * add to rproc core
> > + *
> > + * @pdev: domain platform device for current R5 core
> > + * @node: pointer of the device node for current R5 core
> > + * @rpu_mode: mode to configure RPU, split or lockstep
> > + *
> > + * Return: 0 for success, negative value for failure.
> > + */
> > +static struct zynqmp_r5_rproc *zynqmp_r5_probe(struct platform_device *pdev,
> > + struct device_node *node,
> > + enum rpu_oper_mode rpu_mode)
> > +{
> > + int ret, num_banks;
> > + struct device *dev = &pdev->dev;
> > + struct rproc *rproc_ptr;
> > + struct zynqmp_r5_rproc *z_rproc;
> > + struct device_node *r5_node;
> > +
> > + /* Allocate remoteproc instance */
> > + rproc_ptr = devm_rproc_alloc(dev, dev_name(dev), &zynqmp_r5_rproc_ops,
> > + NULL, sizeof(struct zynqmp_r5_rproc));
> > + if (!rproc_ptr) {
> > + ret = -ENOMEM;
> > + goto error;
> > + }
> > +
> > + rproc_ptr->auto_boot = false;
> > + z_rproc = rproc_ptr->priv;
> > + z_rproc->rproc = rproc_ptr;
> > + r5_node = z_rproc->rproc->dev.parent->of_node;
> > +
> > + /* Set up DMA mask */
> > + ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
> > + if (ret)
> > + goto error;
> > +
> > + /* Get R5 power domain node */
> > + ret = of_property_read_u32(node, "power-domain", &z_rproc->pnode_id);
> > + if (ret)
> > + goto error;
> > +
> > + ret = r5_set_mode(z_rproc, rpu_mode);
> > + if (ret)
> > + goto error;
> > +
> > + if (of_property_read_bool(node, "mboxes")) {
> > + ret = zynqmp_r5_setup_mbox(z_rproc, node);
> > + if (ret)
> > + goto error;
> > + }
> > +
> > + /* go through TCM banks for r5 node */
> > + num_banks = of_count_phandle_with_args(r5_node, BANK_LIST_PROP, NULL);
> > + if (num_banks <= 0) {
> > + dev_err(dev, "need to specify TCM banks\n");
> > + ret = -EINVAL;
> > + goto error;
> > + }
> > +
> > + if (num_banks > NUM_SRAMS) {
> > + dev_err(dev, "max number of srams is %d. given: %d \r\n",
> > + NUM_SRAMS, num_banks);
> > + ret = -EINVAL;
> > + goto error;
> > + }
> > +
> > + /* construct collection of srams used by the current R5 core */
> > + for (; num_banks; num_banks--) {
> > + struct resource rsc;
> > + struct device_node *dt_node;
> > + resource_size_t size;
> > + int i;
> > +
> > + dt_node = of_parse_phandle(r5_node, BANK_LIST_PROP, i);
>
> Variable @i is not initialised but it is used as an index to retrieve a handle
> to the sram banks. That code _should_ have failed frequently, or at least have
> yielded abnormal results often enough to be noticed. Why wasn't that the case?
>
> I will stop here for the moment.
>
> [Ben]
> Yes, this should be initialized. The reason it got through is that i happened to come out as 0 and the 0th bank housed the required data, so the case of the other writable SRAMs (0xFFE20000 here, in split mode on R5-0) was not caught.
>
Here @i is a variable allocated on the stack and as such it is guaranteed to be
garbage until initialised - it will do anything but default to 0.
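
To illustrate the point, a trimmed-down sketch (not code from the patch):

	void example(void)
	{
		int i;	/* automatic storage - never zeroed by the compiler */

		/*
		 * Undefined behaviour: i holds whatever happened to be on
		 * the stack, so this could print 0 on one run and a random
		 * value on the next.
		 */
		pr_info("i = %d\n", i);
	}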
> Instead of i, I will use:
>
> sram_node = of_parse_phandle(node, BANK_LIST_PROP,
> num_banks - 1);
Do you have to start with the last bank? If memory serves me well, that wasn't the
case in the previous revisions. Why not go back to the implementation you had
in V25?
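
For reference, something along these lines (a minimal sketch only - it reuses the
variables already declared in the patch and assumes @i is moved to the top of the
function; I haven't checked it against V25 word for word):

	for (i = 0; i < num_banks; i++) {
		struct device_node *dt_node;

		dt_node = of_parse_phandle(r5_node, BANK_LIST_PROP, i);
		if (!dt_node) {
			ret = -EINVAL;
			goto error;
		}

		/* ... map the bank and record it in the sram list ... */

		of_node_put(dt_node);
	}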