Message-ID: <20200325203812.GA9384@xps15>
Date: Wed, 25 Mar 2020 14:38:12 -0600
From: Mathieu Poirier <mathieu.poirier@...aro.org>
To: Suman Anna <s-anna@...com>
Cc: Bjorn Andersson <bjorn.andersson@...aro.org>,
Loic Pallardy <loic.pallardy@...com>,
Arnaud Pouliquen <arnaud.pouliquen@...com>,
Tero Kristo <t-kristo@...com>,
linux-remoteproc@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] remoteproc: fall back to using parent memory pool
if no dedicated available
On Thu, Mar 19, 2020 at 11:23:20AM -0500, Suman Anna wrote:
> From: Tero Kristo <t-kristo@...com>
>
> In some cases, like with OMAP remoteproc, we are not creating dedicated
> memory pool for the virtio device. Instead, we use the same memory pool
> for all shared memories. The current virtio memory pool handling forces
> a split between these two, as a separate device is created for it,
> causing memory to be allocated from bad location if the dedicated pool
> is not available. Fix this by falling back to using the parent device
> memory pool if dedicated is not available.
>
> Fixes: 086d08725d34 ("remoteproc: create vdev subdevice with specific dma memory pool")
> Signed-off-by: Tero Kristo <t-kristo@...com>
> Signed-off-by: Suman Anna <s-anna@...com>
> ---
> v2:
> - Address Arnaud's concerns about hard-coded memory-region index 0
> - Update the comment around the new code addition
> v1: https://patchwork.kernel.org/patch/11422721/
>
> drivers/remoteproc/remoteproc_virtio.c | 15 +++++++++++++++
> include/linux/remoteproc.h | 2 ++
> 2 files changed, 17 insertions(+)
>
> diff --git a/drivers/remoteproc/remoteproc_virtio.c b/drivers/remoteproc/remoteproc_virtio.c
> index eb817132bc5f..b687715cdf4b 100644
> --- a/drivers/remoteproc/remoteproc_virtio.c
> +++ b/drivers/remoteproc/remoteproc_virtio.c
> @@ -369,6 +369,21 @@ int rproc_add_virtio_dev(struct rproc_vdev *rvdev, int id)
> goto out;
> }
> }
> + } else {
> + struct device_node *np = rproc->dev.parent->of_node;
> +
> + /*
> + * If we don't have dedicated buffer, just attempt to re-assign
> + * the reserved memory from our parent. A default memory-region
> + * at index 0 from the parent's memory-regions is assigned for
> + * the rvdev dev to allocate from, and this can be customized
> + * by updating the vdevbuf_mem_id in platform drivers if
> + * desired. Failure is non-critical and the allocations will
> + * fall back to global pools, so don't check return value
> + * either.
I'm perplexed... The changelog indicates that if a dedicated memory pool is
not available, allocations happen from a bad location, yet here failure to
get hold of the parent's memory pool is treated as non-critical.
> + */
> + of_reserved_mem_device_init_by_idx(dev, np,
> + rproc->vdevbuf_mem_id);
I wonder if using an index set up by the platform code is really the best way
forward when we already have the carveout mechanism available to us. I see the
platform code adding a carveout that would have the same name as rproc->name.
From there, in rproc_add_virtio_dev(), we could have something like:
mem = rproc_find_carveout_by_name(rproc, "%s", rproc->name);
That would be very flexible: the location of the reserved memory within the
memory-region could change without fear of breaking things, and there would be
no need to add a new field to struct rproc.
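
To sketch what I have in mind (completely untested, and assuming the fallback
can hand the carveout to dma_declare_coherent_memory() the same way the
dedicated vdev buffer path above does):

	} else {
		struct rproc_mem_entry *mem;

		/*
		 * No dedicated vdev buffer carveout: look for a carveout
		 * registered by the platform driver under the rproc's own
		 * name and use that as the vdev buffer pool instead.
		 * Failure is non-critical - allocations simply fall back
		 * to the global pools.
		 */
		mem = rproc_find_carveout_by_name(rproc, "%s", rproc->name);
		if (mem)
			dma_declare_coherent_memory(dev,
						    (phys_addr_t)mem->dma,
						    mem->da, mem->len);
	}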
Let me know what you think.
Thanks,
Mathieu
> }
>
> /* Allocate virtio device */
> diff --git a/include/linux/remoteproc.h b/include/linux/remoteproc.h
> index ed127b2d35ca..07bd73a6d72a 100644
> --- a/include/linux/remoteproc.h
> +++ b/include/linux/remoteproc.h
> @@ -481,6 +481,7 @@ struct rproc_dump_segment {
> * @auto_boot: flag to indicate if remote processor should be auto-started
> * @dump_segments: list of segments in the firmware
> * @nb_vdev: number of vdev currently handled by rproc
> + * @vdevbuf_mem_id: default memory-region index for allocating vdev buffers
> */
> struct rproc {
> struct list_head node;
> @@ -514,6 +515,7 @@ struct rproc {
> bool auto_boot;
> struct list_head dump_segments;
> int nb_vdev;
> + u8 vdevbuf_mem_id;
> u8 elf_class;
> };
>
> --
> 2.23.0
>