Message-ID: <6923d6ed-e357-b083-1830-8396d788efe5@linaro.org>
Date: Mon, 10 Dec 2018 12:18:17 +0200
From: Georgi Djakov <georgi.djakov@...aro.org>
To: "Rafael J. Wysocki" <rafael@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: evgreen@...omium.org, Linux PM <linux-pm@...r.kernel.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Rob Herring <robh+dt@...nel.org>,
Michael Turquette <mturquette@...libre.com>,
Kevin Hilman <khilman@...libre.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Saravana Kannan <skannan@...eaurora.org>,
bjorn.andersson@...aro.org,
Amit Kucheria <amit.kucheria@...aro.org>,
seansw@....qualcomm.com, daidavid1@...eaurora.org,
Mark Rutland <mark.rutland@....com>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
abailon@...libre.com, maxime.ripard@...tlin.com,
Arnd Bergmann <arnd@...db.de>,
Thierry Reding <thierry.reding@...il.com>,
ksitaraman@...dia.com, sanjayc@...dia.com,
"devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
linux-tegra@...r.kernel.org, Doug Anderson <dianders@...omium.org>
Subject: Re: [PATCH v10 0/8] Introduce on-chip interconnect API

Hi Rafael,

On 12/10/18 11:04, Rafael J. Wysocki wrote:
> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@...uxfoundation.org> wrote:
>>
>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@...aro.org> wrote:
>>>>
>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
>>>> graphics, modem). These cores talk to each other and can generate a lot of
>>>> data flowing through the on-chip interconnects. These interconnect buses
>>>> can form different topologies, such as crossbars, point-to-point buses and
>>>> hierarchical buses, or follow the network-on-chip concept.
>>>>
>>>> These buses are usually sized to handle use cases with high data
>>>> throughput, but that capacity is not needed all the time, and running them
>>>> at full speed consumes a lot of power. Furthermore, the priority between
>>>> masters can vary depending on the running use case, such as video playback
>>>> or CPU-intensive tasks.
>>>>
>>>> Having an API to express the system's requirements in terms of bandwidth
>>>> and QoS lets us adapt the interconnect configuration to match them by
>>>> scaling frequencies, setting link priorities and tuning QoS parameters.
>>>> This configuration can be a static, one-time operation done at boot on
>>>> some platforms, or a dynamic set of operations that happen at run-time.
>>>>
>>>> This patchset introduces a new API to gather the requirements and
>>>> configure the interconnect buses across the entire chipset to fit the
>>>> current demand. The API is NOT for changing the performance of the
>>>> endpoint devices, but only of the interconnect path between them.
>>>
>>> For what it's worth, we are ready to land this in Chrome OS. I think
>>> this series has been very well discussed and reviewed, hasn't changed
>>> much in the last few spins, and is in good enough shape to use as a
>>> base for future patches. Georgi's also done a great job reaching out
>>> to other SoC vendors, and there appears to be enough consensus that
>>> this framework will be usable by more than just Qualcomm. There are
>>> also several drivers on the list with patches that use this framework,
>>> and more to come, so it made sense (to us) to get this base framework
>>> nailed down. In my experiments, this is an important
>>> piece of the overall power management story, especially on systems
>>> that are mostly idle.
>>>
>>> I'll continue to track changes to this series and we will ultimately
>>> reconcile with whatever happens upstream, but I thought it was worth
>>> sending this note to express our "thumbs up" towards this framework.
>>
>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
>> it to the tree if all looks good.
>
> I'm honestly not sure if it is ready yet.
>
> New versions keep coming, which may give that impression, but we had
> some discussion about it at the LPC, and some serious questions were
> raised there, for instance regarding the DT bindings introduced in this
> series. I'm not sure how that particular issue has been addressed, for
> example.

There have been no changes to the bindings since v4 (other than squashing
the consumer and provider bindings into a single patch and fixing typos).
The last DT comment was on v9 [1], where Rob wanted confirmation from
other SoC vendors that this works for them too. We now have that
confirmation, and there are patches posted on the list [2].
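
For context, the consumer side of those bindings is exercised roughly
like this (just a sketch: the foo device and the "dma-mem" path name are
hypothetical, and the function names follow the consumer API proposed in
this series, so details may differ from this revision):

#include <linux/err.h>
#include <linux/interconnect.h>
#include <linux/platform_device.h>

static int foo_probe(struct platform_device *pdev)
{
	struct icc_path *path;

	/*
	 * Resolve the path named "dma-mem" via the consumer node's
	 * "interconnects" / "interconnect-names" DT properties.
	 */
	path = of_icc_get(&pdev->dev, "dma-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* ... request bandwidth on the path while active ... */

	/* Release the path handle on teardown. */
	icc_put(path);
	return 0;
}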
The second thing (also discussed at LPC) was about possible cases where
some consumer drivers can't calculate how much bandwidth they actually
need, and how to address that. The proposal was to extend the OPP
bindings with one more property, but that is not part of this patchset.
It is a future step that needs more discussion on the mailing list. If a
driver really needs some bandwidth data now, it should be put into the
driver and not into DT, as sketched below. Once we have enough consumers,
we can discuss again whether it makes sense to extract something into DT
or not.
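
To illustrate, here is a sketch of a hypothetical driver keeping its
bandwidth needs in a table in the code instead of in DT (assuming the
icc_set_bw() form of the setter, taking average and peak bandwidth in
kB/s; the naming has varied across revisions, and the numbers below are
made up):

#include <linux/interconnect.h>
#include <linux/types.h>

enum foo_use_case { FOO_IDLE, FOO_PLAYBACK };

/* Per-use-case bandwidth data lives in the driver, not in DT. */
static const struct {
	u32 avg_bw;	/* average bandwidth, kB/s */
	u32 peak_bw;	/* peak bandwidth, kB/s */
} foo_bw_table[] = {
	[FOO_IDLE]     = { 0, 0 },
	[FOO_PLAYBACK] = { 100000, 200000 },	/* made-up values */
};

static int foo_set_use_case(struct icc_path *path, enum foo_use_case uc)
{
	/* Request bandwidth for the current use case on this path. */
	return icc_set_bw(path, foo_bw_table[uc].avg_bw,
			  foo_bw_table[uc].peak_bw);
}

The framework then aggregates the requests from all consumers sharing
each node of the path and applies the result to the hardware.
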
Thanks,
Georgi
[1] https://lkml.org/lkml/2018/9/25/939
[2] https://lkml.org/lkml/2018/11/28/12