Message-ID: <7e7c29a7-af04-04a8-cb76-0c406f8f855c@linaro.org>
Date: Tue, 14 Mar 2017 17:41:54 +0200
From: Georgi Djakov <georgi.djakov@...aro.org>
To: Rob Herring <robh@...nel.org>
Cc: linux-pm@...r.kernel.org, rjw@...ysocki.net,
gregkh@...uxfoundation.org, khilman@...libre.com,
mturquette@...libre.com, vincent.guittot@...aro.org,
skannan@...eaurora.org, sboyd@...eaurora.org,
andy.gross@...aro.org, seansw@....qualcomm.com,
davidai@...cinc.com, devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-arm-msm@...r.kernel.org
Subject: Re: [RFC v0 0/2] Introduce on-chip interconnect API
On 03/03/2017 08:21 AM, Rob Herring wrote:
> On Wed, Mar 01, 2017 at 08:22:33PM +0200, Georgi Djakov wrote:
>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
>> graphics, modem). These cores communicate with each other and can generate a
>> lot of data flowing through the on-chip interconnects. These interconnect buses
>> can form different topologies such as crossbars, point-to-point links,
>> hierarchical buses, or a network-on-chip.
>>
>> These buses are usually sized to handle use cases with high data throughput,
>> but such throughput is not needed all the time, and the buses consume a lot of
>> power. Furthermore, the priority between masters can vary depending on the
>> running use case, such as video playback or CPU-intensive tasks.
>>
>> Having an API to express the system's requirements in terms of bandwidth
>> and QoS lets us adapt the interconnect configuration to match them by
>> scaling frequencies, setting link priorities and tuning QoS parameters.
>> This configuration can be a static, one-time operation done at boot for some
>> platforms, or a dynamic set of operations that happen at run-time.
>>
>> This patchset introduces a new API to gather the requirements and configure
>> the interconnect buses across the entire chipset to fit the current demand.
>> The API is NOT for changing the performance of the endpoint devices, but only
>> that of the interconnect path in between them.
>>
>> The API uses a consumer/provider model, where the providers are the
>> interconnect controllers and the consumers could be various drivers.
>> The consumers request an interconnect resource (path) to an endpoint and set
>> the desired constraints on this data flow path. The provider(s) receive
>> requests from consumers and aggregate these requests for all master-slave
>> pairs on that path. Then the providers configure each node participating in
>> the topology according to the requested data flow paths, physical links and
>> constraints. The topology could be complicated and multi-tiered and is SoC
>> specific.
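
(To illustrate the consumer side of the above, here is a minimal, purely
hypothetical sketch -- the function and structure names below are
placeholders only and do not match the code in this RFC.)

/* Hypothetical consumer usage -- names are illustrative only. */
static int example_start_streaming(struct device *dev)
{
        struct interconnect_path *path;
        int ret;

        /* Request a path from our master port to the memory controller. */
        path = interconnect_get(dev, "video-decoder", "memory");
        if (IS_ERR(path))
                return PTR_ERR(path);

        /*
         * Ask for the bandwidth needed on that path. The providers along
         * the path aggregate the requests of all consumers and reprogram
         * the hardware (frequencies, priorities, QoS) accordingly.
         */
        ret = interconnect_set_bandwidth(path, 800000 /* kB/s */);
        if (ret) {
                interconnect_put(path);
                return ret;
        }

        /* ... start the DMA transfers ... */

        return 0;
}
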
>>
>> Below is a simplified diagram of a real-world SoC topology. The interconnect
>> providers are the memory front-end and the NoCs.
>>
>> +----------------+ +----------------+
>> | HW Accelerator |--->| M NoC |<---------------+
>> +----------------+ +----------------+ |
>> | | +------------+
>> +-------------+ V +------+ | |
>> | +--------+ | PCIe | | |
>> | | Slaves | +------+ | |
>> | +--------+ | | C NoC |
>> V V | |
>> +------------------+ +------------------------+ | | +-----+
>> | |-->| |-->| |-->| CPU |
>> | |-->| |<--| | +-----+
>> | Memory | | S NoC | +------------+
>> | |<--| |---------+ |
>> | |<--| |<------+ | | +--------+
>> +------------------+ +------------------------+ | | +-->| Slaves |
>> ^ ^ ^ ^ | | +--------+
>> | | | | | V
>> +-----+ | +-----+ +-----+ +---------+ +----------------+ +--------+
>> | CPU | | | GPU | | DSP | | Masters |-->| P NoC |-->| Slaves |
>> +-----+ | +-----+ +-----+ +---------+ +----------------+ +--------+
>> |
>> +-------+
>> | Modem |
>> +-------+
>>
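
(On the provider side, each NoC or memory front-end driver would register
with the framework and get called back with the aggregated constraints for
its nodes. Again a purely hypothetical sketch -- structure and callback
names are illustrative, not the code in this RFC.)

#define SNOC_BUS_WIDTH_BYTES    8   /* hypothetical 64-bit wide bus */

struct snoc_provider {
        struct interconnect_provider provider;
        struct clk *bus_clk;
};

/* Called by the framework with the sum of all consumer requests (bytes/s). */
static int snoc_set(struct interconnect_provider *p, u64 agg_bandwidth)
{
        struct snoc_provider *snoc =
                container_of(p, struct snoc_provider, provider);

        /* Scale the bus clock so it can sustain the aggregated bandwidth. */
        unsigned long rate = div_u64(agg_bandwidth, SNOC_BUS_WIDTH_BYTES);

        return clk_set_rate(snoc->bus_clk, rate);
}
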
>> This RFC does not implement all features, but only the main skeleton, in order
>> to check the validity of the proposal. Currently it works only with device tree
>> and platform devices.
>>
>> TODO:
>> * Constraints are currently stored in an internal data structure. Should PM QoS
>>   be used instead?
>> * Rework the framework to not depend on DT, as frameworks cannot be tied
>>   directly to firmware interfaces. Add support for ACPI?
>
> I would start without DT even. You can always have the data you need in
> the kernel. This will be more flexible as you're not defining an ABI as
> this evolves. I think it will take some time to reach consensus on how to
> represent the bus master view of buses/interconnects (it has been
> attempted before).
>
> Rob
>
Thanks for the comment and for discussing this off-line! As the main
concern here is to first see how multiple platforms describe this before
we come up with a common binding, I will convert this to initially use
platform data (see the sketch below). Then later we will figure out what
exactly to pull into DT.
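
As a purely illustrative sketch (the structure names and fields here are
hypothetical and not final), the per-SoC topology could then be described
with static tables along these lines:

enum { SNOC_ID, MNOC_ID, CNOC_ID, PNOC_ID, MEM_ID };

struct icc_node_desc {
        const char *name;
        int id;
        const int *links;       /* ids of directly connected nodes */
        int num_links;
};

static const int mnoc_links[] = { SNOC_ID, CNOC_ID };

static const struct icc_node_desc example_soc_topology[] = {
        { .name = "mnoc", .id = MNOC_ID,
          .links = mnoc_links, .num_links = ARRAY_SIZE(mnoc_links) },
        /* ... remaining NoCs, masters and slaves ... */
};
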
BR,
Georgi