Message-ID: <54DE22B4.7020807@ti.com>
Date: Fri, 13 Feb 2015 10:13:40 -0600
From: Suman Anna <s-anna@...com>
To: Ohad Ben-Cohen <ohad@...ery.com>
CC: Tony Lindgren <tony@...mide.com>,
Kevin Hilman <khilman@...aro.org>,
Dave Gerlach <d-gerlach@...com>, Robert Tivy <rtivy@...com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-omap@...r.kernel.org" <linux-omap@...r.kernel.org>,
linux-arm <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH v3 2/2] remoteproc: add support to handle internal memories
Ohad,
On 02/12/2015 11:20 PM, Ohad Ben-Cohen wrote:
> On Thu, Feb 12, 2015 at 10:54 PM, Suman Anna <s-anna@...com> wrote:
>> My original motivation was that it would only need to be added to
>> firmwares requiring support for loading into internal memories;
>> otherwise, these regions are left to be managed entirely by the
>> software running on the remote processor, and the MPU will not even
>> touch them.
>
> Sure. But even if you guys use this interface correctly, this
> patch essentially exposes ioremap to user space, which is something we
> generally want to avoid.
>
>> So, let me know if this is a NAK. If so, we have two options: one is
>> to go with the sram node model, where each region has to be defined
>> separately and the rproc nodes gain a specific property for getting
>> the gen_pool handles. The other is simply to define these as <reg>
>> entries and use devm_ioremap_resource() (i.e., use DT to define the
>> regions instead of a resource table entry).
>
> Any approach where these regions are defined explicitly really sounds
> better. If you could look into these two alternatives that would be
> great.
OK, will do. Meanwhile, can you pick up patch 1, which is independent of
this patch?
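
For reference, the two alternatives could look roughly like the DT
sketch below. Node names, addresses, sizes and the rproc compatible
string are hypothetical placeholders; "mmio-sram" is the existing
generic on-chip SRAM binding:

```dts
/* Alternative 1: dedicated sram node, consumed via a gen_pool handle */
ocmcram: sram@40300000 {
	compatible = "mmio-sram";
	reg = <0x40300000 0x10000>;
};

dsp1: dsp {
	compatible = "ti,dsp-rproc";	/* hypothetical */
	sram = <&ocmcram>;		/* driver looks up a gen_pool here */
};

/* Alternative 2: internal memories described as plain <reg> entries,
 * mapped by the driver with devm_ioremap_resource() */
dsp2: dsp@40800000 {
	compatible = "ti,dsp-rproc";	/* hypothetical */
	reg = <0x40800000 0x8000>,	/* l2ram */
	      <0x40e00000 0x8000>;	/* l1pram */
	reg-names = "l2ram", "l1pram";
};
```

With the second form, the driver would call platform_get_resource()
(or the _byname variant) plus devm_ioremap_resource() per region at
probe time, instead of ioremapping addresses taken from resource table
entries in the firmware.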
regards
Suman