Message-ID: <5dd06ff3-4fe7-af11-33ef-6dc9ed5fd8a5@linaro.org>
Date: Wed, 13 Feb 2019 14:51:40 +0200
From: Georgi Djakov <georgi.djakov@...aro.org>
To: Greg KH <gregkh@...uxfoundation.org>
Cc: jcrouse@...eaurora.org, robdclark@...il.com, evgreen@...omium.org,
freedreno@...ts.freedesktop.org, linux-arm-msm@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org
Subject: Re: [PATCH] drm/msm/a6xx: Add support for an interconnect path
Hi,
On 2/12/19 16:35, Greg KH wrote:
> On Tue, Feb 12, 2019 at 04:07:35PM +0200, Georgi Djakov wrote:
>> Hi Greg,
>>
>> On 2/12/19 12:16, Greg KH wrote:
>>> On Tue, Feb 12, 2019 at 11:52:38AM +0200, Georgi Djakov wrote:
>>>> From: Jordan Crouse <jcrouse@...eaurora.org>
>>>>
>>>> Try to get the interconnect path for the GPU and vote for the maximum
>>>> bandwidth to support all frequencies. This is needed for performance.
>>>> Later we will want to scale the bandwidth based on the frequency to
>>>> also optimize for power but that will require some device tree
>>>> infrastructure that does not yet exist.
>>>>
>>>> v6: use icc_set_bw() instead of icc_set()
>>>> v5: Remove hardcoded interconnect name and just use the default
>>>> v4: Don't use a port string at all to skip the need for names in the DT
>>>> v3: Use macros and change port string per Georgi Djakov
>>>>
>>>> Signed-off-by: Jordan Crouse <jcrouse@...eaurora.org>
>>>> Acked-by: Rob Clark <robdclark@...il.com>
>>>> Reviewed-by: Evan Green <evgreen@...omium.org>
>>>> Signed-off-by: Georgi Djakov <georgi.djakov@...aro.org>
>>>> ---
>>>>
>>>> Hi Greg,
>>>>
>>>> If not too late, could you please take this patch into char-misc-next.
>>>> It is adding the first consumer of the interconnect API. We are just
>>>> getting the code in place, without making it functional yet, as some
>>>> DT bits are still needed to actually enable it. We have Rob's Ack to
>>>> merge this together with the interconnect code. This patch has already
>>>> spent some time in linux-next without any issues.
>>>
>>> I have a question about the interconnect code. Last week I saw a
>>> presentation about the resctrl/RDT code from ARM that is coming (MPAM),
>>> and it really looks like the same functionality as this interconnect
>>> code. In fact, this code looks like the existing resctrl stuff, right?
>>
>> Thanks for the question! It's nice that MPAM is moving forward. When I
>> looked into the MPAM draft spec a year ago, it was an optional
>> extension, mentioning mostly use-cases with VMs on server systems.
>>
>> But anyway, MPAM is only available for ARMv8.2+ cores as an optional
>> extension and aarch32 is not supported. In contrast to that, the
>> interconnect code is generic and does not put any limitations on the
>> platform/architecture that can use it - just the platform specific
>> implementation would be different. We have discussed in the past that
>> it can be used even on x86 platforms to provide hints to firmware.
>
> Yes, but resctrl is arch independent. It's not the "backend" that I'm
> concerned about, it's the userspace and in-kernel api that I worry
> about.
Agree that resctrl is now arch independent, but it looks to me that
resctrl serves a different purpose. The two may sound similar because
both touch on bandwidth management, but they are completely different,
and resctrl does not seem suitable for managing interconnects on
system-on-chip systems. If I understand correctly, resctrl is about
monitoring and controlling system resources like cache (L2, L3) and
memory bandwidth that are used by applications, VMs and containers, in
a CPU-centric approach. It does this by making use of CPU hardware
features and exposes them via a filesystem to be controlled from
userspace.
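For illustration, this is roughly what driving resctrl from userspace
looks like (a minimal sketch; the group name and the 20% memory
bandwidth value are made up, and the schemata line assumes the MBA
resource is present):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        /* Assumes resctrl is already mounted at /sys/fs/resctrl:
         *   mount -t resctrl resctrl /sys/fs/resctrl */
        if (mkdir("/sys/fs/resctrl/example_group", 0755) && errno != EEXIST)
            return 1;

        /* Limit the group to 20% memory bandwidth on domain 0 (MBA) */
        FILE *f = fopen("/sys/fs/resctrl/example_group/schemata", "w");
        if (!f)
            return 1;
        fprintf(f, "MB:0=20\n");
        fclose(f);

        /* Tasks would then be assigned by writing their PIDs to the
         * group's "tasks" file. */
        return 0;
    }

It is inherently about tasks running on CPUs, which is the mismatch
described below.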
>>> So why shouldn't we just drop the interconnect code and use resctrl
>>> instead as it's already merged?
>>
>> I haven't seen any MPAM code so far, but I assume that we can have an
>> interconnect provider that implements this MPAM extension for systems
>> that support it (and want to use it). Currently there are people working
>> on various interconnect platform drivers from 5 different SoC vendors,
>> and we have agreed to use common DT bindings (and API). I doubt that
>> even a single one of these platforms is based on v8.2+. Such SoCs will
>> probably come in the future, and then I expect people to make use of
>> MPAM in some interconnect provider driver.
>
> Again, don't focus on MPAM as-is, it's the resctrl api that I would like
> to see explained why interconnect can't use.
While resctrl can work for managing, for example, CPU-to-memory
bandwidth for processes, this is not enough for the bigger picture if
you have a whole system-on-chip topology with distributed hardware
components talking to each other. The "Linux" CPU might not even be the
central arbiter in such topologies. Also, the interconnect code does not
support interaction with userspace.
Some reasons why the interconnect code can't use the resctrl API now:
- The distributed hardware components need to express which interconnect
path they use and how much bandwidth they need on it, so that the whole
system (system-on-chip) can be tuned into its most optimal performance
state. The interconnect code does this with a consumer-provider API (see
the sketch after this list).
- Custom aggregation of the consumer requests needs to be done based on
the topology. SoCs may use different aggregation formulas, depending for
example on whether they use a simple hierarchical bus, a crossbar or a
network-on-chip. When a NoC is used, there is interleaved traffic that
needs to be aggregated, and an interconnect path can span multiple clock
domains (allowing each functional unit to have its own clock domain).
- Support for complex topologies is needed - multi-tiered buses with
devices having multiple paths between each other. A device may choose
which path to use depending on its needs, or even use multiple paths
for load-balancing.
- Topology changes should be supported with an API - there are FPGA
boards that can change their topology.
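To make the consumer side concrete, here is a minimal sketch of how a
hypothetical on-chip device driver would use the interconnect consumer
API (of_icc_get(), icc_set_bw() and icc_put() are the actual calls; the
my_* names and the bandwidth value are made up for the example):

    #include <linux/err.h>
    #include <linux/interconnect.h>
    #include <linux/platform_device.h>

    static struct icc_path *my_path;

    static int my_device_probe(struct platform_device *pdev)
    {
        /* Look up the (default) interconnect path described in DT */
        my_path = of_icc_get(&pdev->dev, NULL);
        if (IS_ERR(my_path))
            return PTR_ERR(my_path);

        /*
         * Vote for average/peak bandwidth in kB/s. The framework
         * aggregates the votes of all consumers sharing the path and
         * asks the provider driver to configure the hardware.
         */
        return icc_set_bw(my_path, 0, 7216000);
    }

    static int my_device_remove(struct platform_device *pdev)
    {
        icc_set_bw(my_path, 0, 0);  /* drop our vote */
        icc_put(my_path);
        return 0;
    }

The aggregation and the actual hardware configuration live in the
platform-specific provider driver behind this API, which is also where
something like MPAM could eventually plug in.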
I looked at the existing resctrl code and at some in-flight patches, but
it doesn't feel right to me to use resctrl. I don't think much of it can
be reused or extended without very significant changes that would
probably twist resctrl away from its original purpose (and maybe
conflict with it).
TL;DR: The functionality of resctrl and the interconnect code is
different: resctrl seems to be about monitoring/managing/enforcing the
usage of resources shared by one or more CPUs, while the interconnect
code is about letting the various modules of a system-on-chip (GPU, DSP,
modem, WiFi, Bluetooth, {en,de}coders, camera, Ethernet, etc.) express
their bandwidth needs in order to improve the power efficiency of the
whole SoC.
Hope that this addressed your concerns.
Thanks,
Georgi