Date:   Wed, 18 Mar 2020 11:37:24 +0200
From:   Tero Kristo <t-kristo@...com>
To:     Arnaud POULIQUEN <arnaud.pouliquen@...com>,
        Suman Anna <s-anna@...com>,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        Loic PALLARDY <loic.pallardy@...com>
CC:     Mathieu Poirier <mathieu.poirier@...aro.org>,
        "linux-remoteproc@...r.kernel.org" <linux-remoteproc@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] remoteproc: fall back to using parent memory pool if
 no dedicated available

On 13/03/2020 18:52, Arnaud POULIQUEN wrote:
> Hi Suman,
> 
>> -----Original Message-----
>> From: Suman Anna <s-anna@...com>
>> Sent: jeudi 5 mars 2020 23:41
>> To: Bjorn Andersson <bjorn.andersson@...aro.org>; Loic PALLARDY
>> <loic.pallardy@...com>
>> Cc: Mathieu Poirier <mathieu.poirier@...aro.org>; Arnaud POULIQUEN
>> <arnaud.pouliquen@...com>; Tero Kristo <t-kristo@...com>; linux-
>> remoteproc@...r.kernel.org; linux-kernel@...r.kernel.org; Suman Anna
>> <s-anna@...com>
>> Subject: [PATCH 1/2] remoteproc: fall back to using parent memory pool if no
>> dedicated available
>>
>> From: Tero Kristo <t-kristo@...com>
>>
>> In some cases, like with OMAP remoteproc, we are not creating a dedicated
>> memory pool for the virtio device. Instead, we use the same memory pool
>> for all shared memories. The current virtio memory pool handling forces a
>> split between these two, as a separate device is created for it, causing
>> memory to be allocated from a bad location if the dedicated pool is not
>> available. Fix this by falling back to the parent device memory pool if
>> a dedicated one is not available.
>>
>> Fixes: 086d08725d34 ("remoteproc: create vdev subdevice with specific dma memory pool")
>> Signed-off-by: Tero Kristo <t-kristo@...com>
>> Signed-off-by: Suman Anna <s-anna@...com>
>> ---
>>   drivers/remoteproc/remoteproc_virtio.c | 10 ++++++++++
>>   1 file changed, 10 insertions(+)
>>
>> diff --git a/drivers/remoteproc/remoteproc_virtio.c b/drivers/remoteproc/remoteproc_virtio.c
>> index 8c07cb2ca8ba..4723ebe574b8 100644
>> --- a/drivers/remoteproc/remoteproc_virtio.c
>> +++ b/drivers/remoteproc/remoteproc_virtio.c
>> @@ -368,6 +368,16 @@ int rproc_add_virtio_dev(struct rproc_vdev *rvdev, int id)
>>   				goto out;
>>   			}
>>   		}
>> +	} else {
>> +		struct device_node *np = rproc->dev.parent->of_node;
>> +
>> +		/*
>> +		 * If we don't have dedicated buffer, just attempt to
>> +		 * re-assign the reserved memory from our parent.
>> +		 * Failure is non-critical so don't check return value
>> +		 * either.
>> +		 */
>> +		of_reserved_mem_device_init_by_idx(dev, np, 0);
>>   	}
> I haven't tested your patchset yet, but reviewing your code, I wonder whether you could declare your memory pool
> in your platform driver using rproc_of_resm_mem_entry_init(). Something like:
> 	struct device_node *mem_node;
> 	struct reserved_mem *rmem;
> 
> 	mem_node = of_parse_phandle(dev->of_node, "memory-region", 0);
> 	rmem = of_reserved_mem_lookup(mem_node);
> 	mem = rproc_of_resm_mem_entry_init(dev, 0, rmem->size,
> 					   rmem->base, "vdev0buffer");
> 
> One main advantage of this implementation is that the index of the memory region would not be hard-coded to 0.
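For illustration, here is a hedged sketch of how that index could be resolved from the device tree rather than hard-coded. This is not part of the patch under review; it assumes the bindings provide a "memory-region-names" property alongside "memory-region", which may not match the actual OMAP bindings:

```c
/*
 * Sketch only, not the patch under review: resolve the vdev buffer
 * region by name instead of a fixed index. Assumes the bindings
 * provide a "memory-region-names" property.
 */
struct device_node *mem_node;
struct reserved_mem *rmem;
struct rproc_mem_entry *mem;
int idx;

idx = of_property_match_string(dev->of_node, "memory-region-names",
			       "vdev0buffer");
if (idx < 0)
	idx = 0;	/* fall back to the first region */

mem_node = of_parse_phandle(dev->of_node, "memory-region", idx);
rmem = mem_node ? of_reserved_mem_lookup(mem_node) : NULL;
of_node_put(mem_node);
if (!rmem)
	return -EINVAL;

mem = rproc_of_resm_mem_entry_init(dev, idx, rmem->size, rmem->base,
				   "vdev0buffer");
if (!mem)
	return -ENOMEM;

rproc_add_carveout(rproc, mem);
```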

It seems like that would work for us also, and thus this patch can be 
dropped. See the following patch. Suman, any comments on this? If this 
seems acceptable, I can send this as a proper patch to the list.

------

From: Tero Kristo <t-kristo@...com>
Date: Wed, 18 Mar 2020 11:22:58 +0200
Subject: [PATCH] remoteproc/omap: Allocate vdev0buffer memory from
  reserved memory pool

Since 086d08725d34 ("remoteproc: create vdev subdevice with specific dma
memory pool"), remoteprocs must allocate a separate vdev memory buffer. As
OMAP remoteproc does not do this yet, the memory gets allocated from the
default DMA pool, and this memory is not suitable for this use. To fix
the issue, map the vdev0buffer to use the same device reserved memory
pool as the rest of the remoteproc.

Signed-off-by: Tero Kristo <t-kristo@...com>
---
  drivers/remoteproc/omap_remoteproc.c | 16 ++++++++++++++++
  1 file changed, 16 insertions(+)

diff --git a/drivers/remoteproc/omap_remoteproc.c b/drivers/remoteproc/omap_remoteproc.c
index 29d19a608af8..024330e31a9e 100644
--- a/drivers/remoteproc/omap_remoteproc.c
+++ b/drivers/remoteproc/omap_remoteproc.c
@@ -1273,6 +1273,9 @@ static int omap_rproc_probe(struct platform_device *pdev)
  	const char *firmware;
  	int ret;
  	struct reset_control *reset;
+	struct device_node *mem_node;
+	struct reserved_mem *rmem;
+	struct rproc_mem_entry *mem;

  	if (!np) {
  		dev_err(&pdev->dev, "only DT-based devices are supported\n");
@@ -1335,6 +1338,19 @@ static int omap_rproc_probe(struct platform_device *pdev)
  		dev_warn(&pdev->dev, "device does not have specific CMA pool.\n");
  		dev_warn(&pdev->dev, "Typically this should be provided,\n");
  		dev_warn(&pdev->dev, "only omit if you know what you are doing.\n");
+	} else {
+		mem_node = of_parse_phandle(pdev->dev.of_node, "memory-region",
+					    0);
+		rmem = of_reserved_mem_lookup(mem_node);
+		mem = rproc_of_resm_mem_entry_init(&pdev->dev, 0, rmem->size,
+						   rmem->base, "vdev0buffer");
+
+		if (!mem) {
+			ret = -ENOMEM;
+			goto release_mem;
+		}
+
+		rproc_add_carveout(rproc, mem);
  	}

  	platform_set_drvdata(pdev, rproc);
-- 
2.17.1
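One hedged observation on the patch above: of_parse_phandle() and of_reserved_mem_lookup() can return NULL, and the else branch dereferences rmem without checking. A sketch of the same branch with error handling added (the release_mem label is assumed to exist in the surrounding probe function, as the quoted diff suggests):

```c
/*
 * Sketch of the probe-time else branch with NULL checks added;
 * not the submitted patch.
 */
} else {
	mem_node = of_parse_phandle(pdev->dev.of_node, "memory-region", 0);
	rmem = mem_node ? of_reserved_mem_lookup(mem_node) : NULL;
	of_node_put(mem_node);
	if (!rmem) {
		ret = -EINVAL;
		goto release_mem;
	}

	mem = rproc_of_resm_mem_entry_init(&pdev->dev, 0, rmem->size,
					   rmem->base, "vdev0buffer");
	if (!mem) {
		ret = -ENOMEM;
		goto release_mem;
	}

	rproc_add_carveout(rproc, mem);
}
```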
--
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
