Date:   Mon, 27 Jul 2020 08:11:09 +0000
From:   Peng Fan <peng.fan@....com>
To:     Oleksij Rempel <o.rempel@...gutronix.de>
CC:     "bjorn.andersson@...aro.org" <bjorn.andersson@...aro.org>,
        "mathieu.poirier@...aro.org" <mathieu.poirier@...aro.org>,
        "robh+dt@...nel.org" <robh+dt@...nel.org>,
        "linux-remoteproc@...r.kernel.org" <linux-remoteproc@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "shawnguo@...nel.org" <shawnguo@...nel.org>,
        "s.hauer@...gutronix.de" <s.hauer@...gutronix.de>,
        "kernel@...gutronix.de" <kernel@...gutronix.de>,
        "festevam@...il.com" <festevam@...il.com>,
        dl-linux-imx <linux-imx@....com>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "devicetree@...r.kernel.org" <devicetree@...r.kernel.org>
Subject: RE: [PATCH 03/10] remoteproc: imx: use devm_ioremap

> Subject: Re: [PATCH 03/10] remoteproc: imx: use devm_ioremap
> 
> On Mon, Jul 27, 2020 at 06:51:00AM +0000, Peng Fan wrote:
> > > Subject: Re: [PATCH 03/10] remoteproc: imx: use devm_ioremap
> > >
> > > On Mon, Jul 27, 2020 at 06:28:20AM +0000, Peng Fan wrote:
> > > > Hi Oleksij,
> > > >
> > > > > Subject: Re: [PATCH 03/10] remoteproc: imx: use devm_ioremap
> > > > >
> > > > > On Fri, Jul 24, 2020 at 04:08:06PM +0800, Peng Fan wrote:
> > > > > > We might need to map a region multiple times, because the
> > > > > > region might be shared between remote processors, such as
> > > > > > i.MX8QM with dual M4 cores. So use devm_ioremap, not
> > > > > > devm_ioremap_resource.
> > > > >
> > > > > Can you please give an example of this kind of shared resources
> > > > > and how they should be handled by two separate devices?
> > > >
> > > > This is to share the vdevbuffer space: there is a vdevbuffer node in
> > > > the device tree, and it will be shared between M4_0 and M4_1.
> > > >
> > > > For the buffer, the Linux DMA API will handle the space.
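To expand on the quoted commit message: devm_ioremap_resource() also calls
devm_request_mem_region(), which claims the range exclusively for one device,
so the second M4 instance mapping the shared region would fail with -EBUSY.
A minimal sketch of the difference (map_shared_region() is a hypothetical
helper, not actual driver code):

#include <linux/device.h>
#include <linux/io.h>
#include <linux/ioport.h>

/* Hypothetical helper; "res" describes the shared vdevbuffer range. */
static void __iomem *map_shared_region(struct device *dev,
                                       struct resource *res)
{
        /*
         * No devm_request_mem_region() here, so both M4 rproc instances
         * can map the same physical range.
         */
        return devm_ioremap(dev, res->start, resource_size(res));
}

Both probes can call this for the same range, whereas only the first
devm_ioremap_resource() caller would succeed.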
> > >
> > > Why does remoteproc need to care about it? If I see it correctly,
> > > from the Linux perspective, it is one buffer and one driver is
> > > responsible for it. Or am I missing something?
> >
> > We do not have the vdev buffer in the resource table, so I added it in
> > the device tree; see below:
> 
> Hm.. if the vdev is not in the resource table and should not be controlled
> by remoteproc, why do we need remoteproc?

I use the same approach as the stm32 rproc driver.

The resource table here only publishes the vring addresses.
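For illustration, a simplified firmware-side table in that style, using the
fw_rsc_* layout from include/linux/remoteproc.h (the vring addresses and
sizes below are made-up placeholders, and the struct name is mine):

#include <linux/remoteproc.h>
#include <linux/stddef.h>
#include <linux/virtio_ids.h>

struct my_resource_table {
        struct resource_table hdr;
        u32 offset[1];                  /* offset of each entry */
        struct fw_rsc_hdr vdev_hdr;
        struct fw_rsc_vdev vdev;
        struct fw_rsc_vdev_vring vring[2];
} __packed;

static struct my_resource_table table = {
        .hdr = { .ver = 1, .num = 1 },
        .offset = { offsetof(struct my_resource_table, vdev_hdr) },
        .vdev_hdr = { .type = RSC_VDEV },
        .vdev = { .id = VIRTIO_ID_RPMSG, .num_of_vrings = 2 },
        /* only the vring device addresses are published here */
        .vring = {
                { .da = 0x90000000, .align = 4096, .num = 256, .notifyid = 0 },
                { .da = 0x90008000, .align = 4096, .num = 256, .notifyid = 1 },
        },
};

Note there is no carveout entry for the vdev buffer itself, which is why it
comes from the device tree instead.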

> 
> >         imx8qm_cm40: imx8qm_cm4@0 {
> >                 compatible = "fsl,imx8qm-cm4";
> >                 rsc-da = <0x90000000>;
> >                 mbox-names = "tx", "rx", "rxdb";
> >                 mboxes = <&lsio_mu5 0 1
> >                           &lsio_mu5 1 1
> >                           &lsio_mu5 3 1>;
> >                 mub-partition = <3>;
> >                 memory-region = <&vdev0vring0>, <&vdev0vring1>, <&vdevbuffer>,
> >                                 <&vdev1vring0>, <&vdev1vring1>;
> >                 core-index = <0>;
> >                 core-id = <IMX_SC_R_M4_0_PID0>;
> >                 status = "okay";
> >                 power-domains = <&pd IMX_SC_R_M4_0_PID0>,
> >                                 <&pd IMX_SC_R_M4_0_MU_1A>;
> >         };
> >
> >         imx8qm_cm41: imx8qm_cm4@1 {
> >                 compatible = "fsl,imx8qm-cm4";
> >                 rsc-da = <0x90100000>;
> >                 mbox-names = "tx", "rx", "rxdb";
> >                 mboxes = <&lsio_mu6 0 1
> >                           &lsio_mu6 1 1
> >                           &lsio_mu6 3 1>;
> >                 mub-partition = <4>;
> >                 memory-region = <&vdev2vring0>, <&vdev2vring1>, <&vdevbuffer>,
> >                                 <&vdev3vring0>, <&vdev3vring1>;
> >                 core-index = <1>;
> >                 core-id = <IMX_SC_R_M4_1_PID0>;
> >                 status = "okay";
> >                 power-domains = <&pd IMX_SC_R_M4_1_PID0>,
> >                                 <&pd IMX_SC_R_M4_1_MU_1A>;
> >         };
> >
> >                 vdevbuffer: vdevbuffer {
> >                         compatible = "shared-dma-pool";
> >                         reg = <0 0x90400000 0 0x100000>;
> >                         no-map;
> >                 };
> >
> > I have the above vdevbuffer node shared between the M4_0 and M4_1 nodes.
> > The vdevbuffer will be used as the virtio data buffer.
> >
> > And I have the following in rproc_add_virtio_dev() to share the vdevbuffer:
> >         /* Try to find dedicated vdev buffer carveout */
> >         mem = rproc_find_carveout_by_name(rproc, "vdev%dbuffer", rvdev->index);
> >         if (!mem)
> >                 mem = rproc_find_carveout_by_name(rproc, "vdevbuffer");
> 
> With kernel v5.8-rc7 I get the following call chain:

Please use linux-next, which has remoteproc support for the M4 being booted
before Linux.

> rproc_boot()
>   rproc_fw_boot()
>     rproc_handle_vdev()
>       rproc_vdev_do_start()
>         rproc_add_virtio_dev()
> 
> 
> So, in the end, we will call rproc_add_virtio_dev() only if we boot the
> firmware from Linux, or if we at least get the resource table.


The resource table can be obtained from the ELF file if the M4 is booted by
Linux, or from a fixed address if the M4 is booted before Linux.
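Roughly, the two paths look like this (a sketch, not the actual driver code;
RSC_TABLE_ADDR is a made-up placeholder for wherever the M4 publishes its
table, e.g. the rsc-da value):

#include <linux/firmware.h>
#include <linux/io.h>
#include <linux/remoteproc.h>
#include <linux/sizes.h>

#include "remoteproc_internal.h"        /* rproc_elf_find_loaded_rsc_table() */

#define RSC_TABLE_ADDR  0x90000000      /* hypothetical placeholder */

static struct resource_table *get_rsc_table(struct rproc *rproc,
                                            const struct firmware *fw)
{
        if (fw)
                /* M4 booted by Linux: parse the table from the ELF image */
                return rproc_elf_find_loaded_rsc_table(rproc, fw);

        /* M4 booted before Linux: read the table from a known address */
        return (struct resource_table __force *)
                devm_ioremap(rproc->dev.parent, RSC_TABLE_ADDR, SZ_4K);
}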

Thanks,
Peng.

> 
> Since none of this seems to be the case, I still do not understand how it
> should work.
> 
> > Hope this is clear.
> 
> :) I still need some time to understand it.
> 
> --
> Pengutronix e.K.                           |                             |
> Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
> 31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
> Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |
