Message-ID: <BYAPR02MB44072A6C1ACAA0C459390895B5020@BYAPR02MB4407.namprd02.prod.outlook.com>
Date: Thu, 15 Oct 2020 18:31:20 +0000
From: Ben Levinsky <BLEVINSK@...inx.com>
To: "linux-remoteproc@...r.kernel.org" <linux-remoteproc@...r.kernel.org>
CC: "Ed T. Mooring" <emooring@...inx.com>,
Stefano Stabellini <stefanos@...inx.com>,
Michal Simek <michals@...inx.com>,
"devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
"michael.auchter@...com" <michael.auchter@...com>,
"mathieu.poirier@...aro.org" <mathieu.poirier@...aro.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Rob Herring <robh+dt@...nel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>
Subject: RE: RE: RE: [PATCH v18 5/5] remoteproc: Add initial zynqmp R5
remoteproc driver
Hi All,
> -----Original Message-----
> From: Michael Auchter <michael.auchter@...com>
> Sent: Tuesday, October 6, 2020 3:21 PM
> To: Ben Levinsky <BLEVINSK@...inx.com>
> Cc: Ed T. Mooring <emooring@...inx.com>; Stefano Stabellini
> <stefanos@...inx.com>; Michal Simek <michals@...inx.com>;
> devicetree@...r.kernel.org; mathieu.poirier@...aro.org; linux-
> remoteproc@...r.kernel.org; linux-kernel@...r.kernel.org;
> robh+dt@...nel.org; linux-arm-kernel@...ts.infradead.org
> Subject: Re: RE: RE: [PATCH v18 5/5] remoteproc: Add initial zynqmp R5
> remoteproc driver
>
> On Tue, Oct 06, 2020 at 09:46:38PM +0000, Ben Levinsky wrote:
> >
> >
> > > -----Original Message-----
> > > From: Michael Auchter <michael.auchter@...com>
> > > Sent: Tuesday, October 6, 2020 2:32 PM
> > > To: Ben Levinsky <BLEVINSK@...inx.com>
> > > Cc: Ed T. Mooring <emooring@...inx.com>; sunnyliangjy@...il.com;
> > > punit1.agrawal@...hiba.co.jp; Stefano Stabellini <stefanos@...inx.com>;
> > > Michal Simek <michals@...inx.com>; devicetree@...r.kernel.org;
> > > mathieu.poirier@...aro.org; linux-remoteproc@...r.kernel.org; linux-
> > > kernel@...r.kernel.org; robh+dt@...nel.org; linux-arm-
> > > kernel@...ts.infradead.org
> > > Subject: Re: RE: [PATCH v18 5/5] remoteproc: Add initial zynqmp R5
> > > remoteproc driver
> > >
> > > On Tue, Oct 06, 2020 at 07:15:49PM +0000, Ben Levinsky wrote:
> > > >
> > > > Hi Michael,
> > > >
> > > > Thanks for the review
> > > >
> > >
> > > < ... snip ... >
> > >
> > > > > > + z_rproc = rproc->priv;
> > > > > > + z_rproc->dev.release = zynqmp_r5_release;
> > > > >
> > > > > This is the only field of z_rproc->dev that's actually initialized, and
> > > > > this device is not registered with the core at all, so zynqmp_r5_release
> > > > > will never be called.
> > > > >
> > > > > Since it doesn't look like there's a need to create this additional
> > > > > device, I'd suggest:
> > > > > - Dropping the struct device from struct zynqmp_r5_rproc
> > > > > - Performing the necessary cleanup in the driver remove
> > > > > callback instead of trying to tie it to device release
> > > >
> > > > For the most part I agree. I believe the device is still needed for
> > > > the mailbox client setup.
> > > >
> > > > As the call to mbox_request_channel_byname() requires its own device
> > > > that has the corresponding child node with the corresponding
> > > > mbox-related properties.
> > > >
> > > > With that in mind, is it still ok to keep the device node?
> > >
> > > Ah, I see. Thanks for the clarification!
> > >
> > > Instead of manually dealing with the device node creation for the
> > > individual processors, perhaps it makes more sense to use
> > > devm_of_platform_populate() to create them. This is also consistent with
> > > the way the TI K3 R5F remoteproc driver does things.
> > >
> > > Cheers,
> > > Michael
> >
> > I've been working on this today and found an approach that I think
> > works with your initial suggestion:
> > - in z_rproc, change dev from struct device to struct device *
> >   ^ its usage is shown below; it is there for the mailbox setup.
> > - in driver probe:
> >   - add a list_head to keep track of each core's z_rproc, for the
> >     driver remove cleanup
> > - in each core's probe (zynqmp_r5_probe), do the following:
> >
> >
> >     rproc_ptr = rproc_alloc(dev, dev_name(dev), &zynqmp_r5_rproc_ops,
> >                             NULL, sizeof(struct zynqmp_r5_rproc));
> >     if (!rproc_ptr)
> >             return -ENOMEM;
> >     z_rproc = rproc_ptr->priv;
> >     z_rproc->dt_node = node;
> >     z_rproc->rproc = rproc_ptr;
> >     z_rproc->dev = &rproc_ptr->dev;
> >     z_rproc->dev->of_node = node;
> > where node is the specific R5 core's of_node (device tree node).
> >
> > the above preserves most of the mailbox setup code.
>
> I see how this works, but it feels a bit weird to me to be overriding
> the remoteproc dev's of_node ptr. Personally I find the
> devm_of_platform_populate() approach a bit less confusing.
>
> But, it's also not my call to make ;). Perhaps a remoteproc maintainer
> can chime in here.
>
> >
Ping for comments here.
I looked at the TI R5 remoteproc driver, and from what I can see the reason for the line:
z_rproc->dev->of_node = node;
is as follows:
The TI driver has only one R5-related remoteproc node, but that node carries the information
for both cores, so the device passed to rproc_alloc() is sufficient for the subsequent mailbox
calls: it already has a device_node with the mbox information.
The Xilinx driver differs in that while there is a cluster device tree node with the
remoteproc-related information, it ALSO has child R5 core nodes that carry their own TCM bank
and mbox information.
Because of this difference, using devm_of_platform_populate() would not remove the need for the
line of code in question: the later mailbox setup calls still require the device field to have a
corresponding device tree node that has the mailbox information.
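To make the structural difference concrete, here is a rough sketch of the kind of layout being described: a cluster node with per-core child nodes that each carry their own TCM and mailbox resources. Node and property names here are illustrative only, not the exact binding proposed in this series:

```dts
/* Hypothetical cluster node with per-core children; names are
 * illustrative only, not the binding proposed in this series. */
r5fss: r5fss@ff9a0000 {
        compatible = "xlnx,zynqmp-r5-remoteproc";
        /* cluster-wide remoteproc information lives here */

        r5f_0: r5f@0 {
                /* per-core resources: TCM banks and mailbox */
                sram = <&tcm_0a>, <&tcm_0b>;
                mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
                mbox-names = "tx", "rx";
        };

        r5f_1: r5f@1 {
                sram = <&tcm_1a>, <&tcm_1b>;
                mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
                mbox-names = "tx", "rx";
        };
};
```

Since each child node owns its mbox properties, mbox_request_channel_byname() needs a per-core device whose of_node points at that child, which is why the assignment in question exists.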
If devm_of_platform_populate() and closer alignment with the merged TI driver are preferred, the
Xilinx R5 driver bindings could instead move the TCM bank info, memory-regions, and
meta-memory-regions into R5 core-specific lists, resembling how the TI R5 driver uses
core-specific properties. At this point I am just trying to suss out a direction for this
patch series.
Your feedback and review are much appreciated,
Ben
> >
> > With this, I have already successfully done the following in a v19 patch
> > - move all the previous driver release code to remove
> > - able to probe, start/stop r5, driver remove repeatedly
> >
> > Also, this mimics the TI R5 driver code: each core's rproc has a list_head,
> > and there is a structure for the cluster which, among other things, maintains
> > a linked list of the cores' rproc information.
> >
> > Thanks
> > Ben