Message-ID: <b4fdf5eb-2bd1-834d-fb2b-6b86a74dd6fc@linaro.org>
Date: Fri, 7 Dec 2018 17:24:17 +0200
From: Georgi Djakov <georgi.djakov@...aro.org>
To: Rob Herring <robh+dt@...nel.org>
Cc: "open list:THERMAL" <linux-pm@...r.kernel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Michael Turquette <mturquette@...libre.com>,
Kevin Hilman <khilman@...libre.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Saravana Kannan <skannan@...eaurora.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Amit Kucheria <amit.kucheria@...aro.org>,
seansw@....qualcomm.com, daidavid1@...eaurora.org,
Evan Green <evgreen@...omium.org>,
Mark Rutland <mark.rutland@....com>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Alexandre Bailon <abailon@...libre.com>,
Maxime Ripard <maxime.ripard@...tlin.com>,
Arnd Bergmann <arnd@...db.de>,
Thierry Reding <thierry.reding@...il.com>,
ksitaraman@...dia.com, sanjayc@...dia.com,
devicetree@...r.kernel.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE"
<linux-arm-kernel@...ts.infradead.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
linux-tegra@...r.kernel.org
Subject: Re: [PATCH v10 1/7] interconnect: Add generic on-chip interconnect
API

Hi Rob,

On 12/5/18 18:16, Rob Herring wrote:
> On Tue, Nov 27, 2018 at 12:03 PM Georgi Djakov <georgi.djakov@...aro.org> wrote:
>>
>> This patch introduces a new API to get requirements and configure the
>> interconnect buses across the entire chipset to fit with the current
>> demand.
>>
>> The API is using a consumer/provider-based model, where the providers are
>> the interconnect buses and the consumers could be various drivers.
>> The consumers request interconnect resources (path) between endpoints and
>> set the desired constraints on this data flow path. The providers receive
>> requests from consumers and aggregate these requests for all master-slave
>> pairs on that path. Then the providers configure each node participating in
>> the topology according to the requested data flow path, physical links and
>> constraints. The topology could be complicated and multi-tiered and is SoC
>> specific.
>>
>> Signed-off-by: Georgi Djakov <georgi.djakov@...aro.org>
>> Reviewed-by: Evan Green <evgreen@...omium.org>
>
> [...]
>
>> +struct icc_path *icc_get(struct device *dev, const int src_id,
>> + const int dst_id);
>> +void icc_put(struct icc_path *path);
>> +int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
>
> icc_set() is very generic, but this function isn't easily extended to
> other parameters than bandwidth. Perhaps icc_set_bandwidth() instead.
> Then when you add some other setting, you just add a new function. Of
> course, if you wind up with a bunch of different parameters, then
> you'll probably need an atomic type interface where you test all the
> settings together and then commit them separately in one call. But

Thanks for the comment. We have already started some discussion with the
Nvidia folks about also supporting latency and priority. We could take a
structure-based approach and pass a struct with all the bandwidth /
latency / priority parameters to a function like icc_set_extended(). It's
too early for any conclusions, though, and for now we support only
bandwidth. The function name can still easily be changed with a follow-up
patch while no one is using it yet. Let me think about it again.
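
To make the idea concrete, here is a rough sketch of what such a
struct-based call could look like. This is purely illustrative: the
icc_set_extended() name and the struct members are placeholders and
nothing has been decided yet.

/*
 * Hypothetical extension -- not part of this patch. A consumer would
 * fill in only the constraints it cares about and pass them in one call.
 */
struct icc_constraints {
	u32 avg_bw;	/* average bandwidth */
	u32 peak_bw;	/* peak bandwidth */
	u32 latency;	/* e.g. maximum tolerable latency */
	u32 priority;	/* e.g. relative priority of the path */
};

int icc_set_extended(struct icc_path *path,
		     const struct icc_constraints *c);
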
> from a DT perspective, I certainly hope there are not lots of new
> settings folks want to add.

Regarding DT, all settings should go into the drivers for now, and later
we can decide whether it makes sense for any of them to really be in
device-tree.
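
For reference, with the current bandwidth-only interface a consumer
driver would request a path and set its constraints roughly as below.
This is only an illustrative sketch based on the prototypes quoted
above; MASTER_ID and SLAVE_ID are placeholder endpoint identifiers and
avg_bw / peak_bw are the requested bandwidth values.

	struct icc_path *path;
	int ret;

	/* Look up a path between the two endpoints */
	path = icc_get(dev, MASTER_ID, SLAVE_ID);
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* Request average and peak bandwidth on the path */
	ret = icc_set(path, avg_bw, peak_bw);
	if (ret)
		goto out;

	/* ... use the device ... */

out:
	/* Drop the request and release the path */
	icc_put(path);
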
Thanks,
Georgi