Message-ID: <83a3f33b-3695-2a40-1c2b-5c38d117c1ad@linaro.org>
Date: Wed, 18 Nov 2020 17:30:22 +0200
From: Georgi Djakov <georgi.djakov@...aro.org>
To: Dmitry Osipenko <digetx@...il.com>,
Thierry Reding <thierry.reding@...il.com>,
Jonathan Hunter <jonathanh@...dia.com>,
Rob Herring <robh+dt@...nel.org>,
Michael Turquette <mturquette@...libre.com>,
Stephen Boyd <sboyd@...nel.org>,
Peter De Schrijver <pdeschrijver@...dia.com>,
MyungJoo Ham <myungjoo.ham@...sung.com>,
Kyungmin Park <kyungmin.park@...sung.com>,
Chanwoo Choi <cw00.choi@...sung.com>,
Mikko Perttunen <cyndis@...si.fi>,
Viresh Kumar <vireshk@...nel.org>,
Peter Geis <pgwipeout@...il.com>,
Nicolas Chauvet <kwizart@...il.com>,
Krzysztof Kozlowski <krzk@...nel.org>
Cc: linux-tegra@...r.kernel.org, linux-pm@...r.kernel.org,
linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH v9 01/17] memory: tegra30: Support interconnect framework
On 18.11.20 0:02, Dmitry Osipenko wrote:
> 17.11.2020 23:24, Georgi Djakov wrote:
>> Hi Dmitry,
>>
>> Thank you for working on this!
>>
>> On 15.11.20 23:29, Dmitry Osipenko wrote:
>>> The Internal and External memory controllers are now memory
>>> interconnect providers. This allows us to use the interconnect API
>>> for tuning the memory configuration. The EMC driver now supports OPPs
>>> and DVFS. The MC driver now supports tuning of the memory arbitration
>>> latency, which needs to be done for ISO memory clients, such as a
>>> Display client.
>>>
>>> Tested-by: Peter Geis <pgwipeout@...il.com>
>>> Signed-off-by: Dmitry Osipenko <digetx@...il.com>
>>> ---
>>> drivers/memory/tegra/Kconfig | 1 +
>>> drivers/memory/tegra/tegra30-emc.c | 349 +++++++++++++++++++++++++++--
>>> drivers/memory/tegra/tegra30.c | 173 +++++++++++++-
>>> 3 files changed, 501 insertions(+), 22 deletions(-)
>>>
>> [..]
>>> diff --git a/drivers/memory/tegra/tegra30.c b/drivers/memory/tegra/tegra30.c
>>> index d0314f29608d..ea849003014b 100644
>>> --- a/drivers/memory/tegra/tegra30.c
>>> +++ b/drivers/memory/tegra/tegra30.c
>> [..]
>>> +
>>> +static int tegra30_mc_icc_set(struct icc_node *src, struct icc_node *dst)
>>> +{
>>> + struct tegra_mc *mc = icc_provider_to_tegra_mc(src->provider);
>>> + const struct tegra_mc_client *client = &mc->soc->clients[src->id];
>>> + u64 peak_bandwidth = icc_units_to_bps(src->peak_bw);
>>> +
>>> + /*
>>> + * Skip pre-initialization that is done by icc_node_add(), which sets
>>> + * bandwidth to maximum for all clients before drivers are loaded.
>>> + *
>>> + * This doesn't make sense for us because we don't have drivers for all
>>> + * clients and it's okay to keep configuration left from bootloader
>>> + * during boot, at least for today.
>>> + */
>>> + if (src == dst)
>>> + return 0;
>>
>> Nit: The "proper" way to express this should be to implement the
>> .get_bw() callback to return zero as initial average/peak bandwidth.
>> I'm wondering if this will work here?
>>
>> The rest looks good to me!
>
> Hello Georgi,
>
> Returning zeros doesn't allow us to skip the initialization that is done
> by provider->set(node, node) in icc_node_add(). It will reconfigure the
> memory latency in accordance with a zero memory bandwidth, which is the
> wrong thing to do.
>
> It would actually be preferable to preset the bandwidth to maximum
> before all drivers are synced, but that should be done only once all the
> drivers are wired up to use the ICC framework. For now it's safer to
> keep the default hardware configuration untouched.
Ok, thanks for clarifying! Is there a way to read this hardware
configuration and convert it into an initial bandwidth? That's actually
the idea behind the get_bw() callback. I am just curious and trying to
get a better understanding of how this works and whether it would be
useful for Tegra.
Thanks,
Georgi