Date:   Fri, 13 Mar 2020 16:52:52 +0000
From:   Arnaud POULIQUEN <arnaud.pouliquen@...com>
To:     Suman Anna <s-anna@...com>,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        Loic PALLARDY <loic.pallardy@...com>
CC:     Mathieu Poirier <mathieu.poirier@...aro.org>,
        Tero Kristo <t-kristo@...com>,
        "linux-remoteproc@...r.kernel.org" <linux-remoteproc@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH 1/2] remoteproc: fall back to using parent memory pool if
 no dedicated available

Hi Suman,

> -----Original Message-----
> From: Suman Anna <s-anna@...com>
> Sent: jeudi 5 mars 2020 23:41
> To: Bjorn Andersson <bjorn.andersson@...aro.org>; Loic PALLARDY
> <loic.pallardy@...com>
> Cc: Mathieu Poirier <mathieu.poirier@...aro.org>; Arnaud POULIQUEN
> <arnaud.pouliquen@...com>; Tero Kristo <t-kristo@...com>; linux-
> remoteproc@...r.kernel.org; linux-kernel@...r.kernel.org; Suman Anna
> <s-anna@...com>
> Subject: [PATCH 1/2] remoteproc: fall back to using parent memory pool if no
> dedicated available
> 
> From: Tero Kristo <t-kristo@...com>
> 
> In some cases, like with OMAP remoteproc, we are not creating dedicated
> memory pool for the virtio device. Instead, we use the same memory pool
> for all shared memories. The current virtio memory pool handling forces a
> split between these two, as a separate device is created for it, causing
> memory to be allocated from bad location if the dedicated pool is not
> available. Fix this by falling back to using the parent device memory pool if
> dedicated is not available.
> 
> Fixes: 086d08725d34 ("remoteproc: create vdev subdevice with specific dma
> memory pool")
> Signed-off-by: Tero Kristo <t-kristo@...com>
> Signed-off-by: Suman Anna <s-anna@...com>
> ---
>  drivers/remoteproc/remoteproc_virtio.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/drivers/remoteproc/remoteproc_virtio.c
> b/drivers/remoteproc/remoteproc_virtio.c
> index 8c07cb2ca8ba..4723ebe574b8 100644
> --- a/drivers/remoteproc/remoteproc_virtio.c
> +++ b/drivers/remoteproc/remoteproc_virtio.c
> @@ -368,6 +368,16 @@ int rproc_add_virtio_dev(struct rproc_vdev *rvdev,
> int id)
>  				goto out;
>  			}
>  		}
> +	} else {
> +		struct device_node *np = rproc->dev.parent->of_node;
> +
> +		/*
> +		 * If we don't have dedicated buffer, just attempt to
> +		 * re-assign the reserved memory from our parent.
> +		 * Failure is non-critical so don't check return value
> +		 * either.
> +		 */
> +		of_reserved_mem_device_init_by_idx(dev, np, 0);
>  	}
I haven't tested your patchset yet, but reviewing your code, I wonder if you could declare your memory pool
in your platform driver using rproc_of_resm_mem_entry_init(). Something like:
	struct device_node *mem_node;
	struct reserved_mem *rmem;
	struct rproc_mem_entry *mem;

	mem_node = of_parse_phandle(dev->of_node, "memory-region", 0);
	rmem = of_reserved_mem_lookup(mem_node);
	mem = rproc_of_resm_mem_entry_init(dev, 0, rmem->size,
					   rmem->base, "vdev0buffer");

The main advantage of this implementation would be that the index of the memory region would not be hard-coded to 0.
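For illustration, a devicetree fragment along these lines could go with such an approach (the node names, addresses, and the use of memory-region-names are assumptions for the sketch, not taken from the patch):

```dts
/* Hypothetical fragment for illustration only; real OMAP/STM32
 * bindings may name the nodes and regions differently. */
reserved-memory {
	#address-cells = <1>;
	#size-cells = <1>;
	ranges;

	/* shared pool used for the virtio vdev buffers */
	vdev0buffer: vdev0buffer@10040000 {
		compatible = "shared-dma-pool";
		reg = <0x10040000 0x100000>;
		no-map;
	};
};

rproc: remoteproc@58820000 {
	memory-region = <&vdev0buffer>;
	memory-region-names = "vdev0buffer";
};
```

With names in place, the platform driver could recover the index with of_property_match_string(np, "memory-region-names", "vdev0buffer") rather than assuming index 0.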

Regards,
Arnaud
> 
>  	/* Allocate virtio device */
> --
> 2.23.0
