Message-ID: <75f1d8f8-84e8-e621-b91d-84b4d15edfa1@linaro.org>
Date: Wed, 29 Aug 2018 15:31:16 +0300
From: Georgi Djakov <georgi.djakov@...aro.org>
To: Maxime Ripard <maxime.ripard@...tlin.com>
Cc: Rob Herring <robh@...nel.org>, linux-pm@...r.kernel.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Rob Herring <robh+dt@...nel.org>,
Mike Turquette <mturquette@...libre.com>, khilman@...libre.com,
Vincent Guittot <vincent.guittot@...aro.org>,
skannan@...eaurora.org,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Amit Kucheria <amit.kucheria@...aro.org>,
seansw@....qualcomm.com, daidavid1@...eaurora.org,
evgreen@...omium.org, Mark Rutland <mark.rutland@....com>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Alexandre Bailon <abailon@...libre.com>,
Arnd Bergmann <arnd@...db.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
linux-arm-msm@...r.kernel.org, devicetree@...r.kernel.org
Subject: Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
Hi Maxime,
On 08/27/2018 06:08 PM, Maxime Ripard wrote:
> Hi!
>
> On Fri, Aug 24, 2018 at 05:51:37PM +0300, Georgi Djakov wrote:
>> Hi Maxime,
>>
>> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
>>> Hi Georgi,
>>>
>>> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
>>>>> There is also a patch series from Maxime Ripard that's addressing the
>>>>> same general area. See "dt-bindings: Add a dma-parent property". We
>>>>> don't need multiple ways to address describing the device to memory
>>>>> paths, so you all had better work out a common solution.
>>>>
>>>> Looks like this fits exactly into the interconnect API concept. I see
>>>> MBUS as the interconnect provider and display/camera as consumers that
>>>> report their bandwidth needs. I am also planning to add support for
>>>> priority.
>>>
>>> Thanks for working on this. After looking at your series, the one thing
>>> I'm a bit uncertain about (and the most important one to us) is how we
>>> would be able to tell through which interconnect the DMA are done.
>>>
>>> This is important to us since our topology, as you've seen, is actually
>>> quite simple, but the RAM is not mapped at the same address on that bus
>>> as on the CPU's, so we need to apply an offset to each buffer being DMA'd.
>>
>> Ok, I see - your problem is not about bandwidth scaling, but about the
>> driver using different memory ranges to access the same location.
>
> Well, it turns out that the problem we are bitten by at the moment is
> the memory range one, but the controller it goes through also provides
> bandwidth scaling, priorities and so on, so it's not too far off.
Thanks for the clarification. Alright, so this will fit nicely into the
model as a provider. I agree that we should try to use the same binding
to describe a path from a master to memory in DT.
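Just to illustrate, here is a rough sketch of what such a binding could
look like. The node names, compatible string and the MASTER_*/SLAVE_* IDs
below are made up for the example; the exact specifier format would depend
on the provider's #interconnect-cells:

	mbus: interconnect@1c62000 {
		compatible = "allwinner,sun8i-h3-mbus";	/* hypothetical */
		reg = <0x01c62000 0x1000>;
		#interconnect-cells = <1>;
	};

	display@1000000 {
		/* ... */
		/* path from the display master to DRAM, through MBUS */
		interconnects = <&mbus MASTER_DE &mbus SLAVE_DRAM>;
	};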
>> So this is not really the same thing - your problem is a different one.
>> Also, the interconnect bindings describe a path and its endpoints.
>> However, I am open to any ideas.
>
> It's describing a path and endpoints, but it can describe multiple of
> them for the same device, right? If so, we'd need to provide
> additional information to distinguish which path is used for DMA.
Sure, multiple paths are supported.
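For example (again with made-up IDs), a consumer with two paths could list
both and use interconnect-names to tell them apart. One option would be to
reserve a well-known name, say "dma-mem", for the path used for DMA, so
that core code could look it up:

	camera@1cb0000 {
		/* ... */
		interconnects = <&mbus MASTER_CSI &mbus SLAVE_DRAM>,
				<&mbus MASTER_CSI &mbus SLAVE_SRAM>;
		interconnect-names = "dma-mem", "sram";
	};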
BR,
Georgi