Date:   Thu, 2 Jul 2020 15:36:07 +0300
From:   Georgi Djakov <georgi.djakov@...aro.org>
To:     Dmitry Osipenko <digetx@...il.com>
Cc:     Thierry Reding <thierry.reding@...il.com>,
        Jonathan Hunter <jonathanh@...dia.com>,
        Rob Herring <robh+dt@...nel.org>,
        Michael Turquette <mturquette@...libre.com>,
        Stephen Boyd <sboyd@...nel.org>,
        Peter De Schrijver <pdeschrijver@...dia.com>,
        MyungJoo Ham <myungjoo.ham@...sung.com>,
        Kyungmin Park <kyungmin.park@...sung.com>,
        Chanwoo Choi <cw00.choi@...sung.com>,
        Mikko Perttunen <cyndis@...si.fi>,
        Artur Świgoń <a.swigon@...sung.com>,
        linux-tegra@...r.kernel.org, linux-pm@...r.kernel.org,
        linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org,
        devicetree@...r.kernel.org
Subject: Re: [PATCH v4 28/37] memory: tegra: Register as interconnect provider

Hi Dmitry,

On 7/2/20 02:36, Dmitry Osipenko wrote:
> 01.07.2020 20:12, Georgi Djakov wrote:
>> Hi Dmitry,
>>
>> Thank you for updating the patches!
> 
> Hello, Georgi!
> 
> Thank you for the review!
> 
>> On 6/9/20 16:13, Dmitry Osipenko wrote:
>>> The memory controller is now a memory interconnect provider. This allows us
>>> to use the interconnect API in order to change the memory configuration.
>>>
>>> Signed-off-by: Dmitry Osipenko <digetx@...il.com>
>>> ---
>>>  drivers/memory/tegra/Kconfig |   1 +
>>>  drivers/memory/tegra/mc.c    | 114 +++++++++++++++++++++++++++++++++++
>>>  drivers/memory/tegra/mc.h    |   8 +++
>>>  include/soc/tegra/mc.h       |   3 +
>>>  4 files changed, 126 insertions(+)
>>>
>>> diff --git a/drivers/memory/tegra/Kconfig b/drivers/memory/tegra/Kconfig
>>> index 5bf75b316a2f..7055fdef2c32 100644
>>> --- a/drivers/memory/tegra/Kconfig
>>> +++ b/drivers/memory/tegra/Kconfig
>>> @@ -3,6 +3,7 @@ config TEGRA_MC
>>>  	bool "NVIDIA Tegra Memory Controller support"
>>>  	default y
>>>  	depends on ARCH_TEGRA
>>> +	select INTERCONNECT
>>>  	help
>>>  	  This driver supports the Memory Controller (MC) hardware found on
>>>  	  NVIDIA Tegra SoCs.
>>> diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
>>> index 772aa021b5f6..7ef7ac9e103e 100644
>>> --- a/drivers/memory/tegra/mc.c
>>> +++ b/drivers/memory/tegra/mc.c
>>> @@ -594,6 +594,118 @@ static __maybe_unused irqreturn_t tegra20_mc_irq(int irq, void *data)
>>>  	return IRQ_HANDLED;
>>>  }
>>>  
>>> +static int tegra_mc_icc_set(struct icc_node *src, struct icc_node *dst)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int tegra_mc_icc_aggregate(struct icc_node *node,
>>> +				  u32 tag, u32 avg_bw, u32 peak_bw,
>>> +				  u32 *agg_avg, u32 *agg_peak)
>>> +{
>>> +	*agg_avg = min((u64)avg_bw + (*agg_avg), (u64)U32_MAX);
>>> +	*agg_peak = max(*agg_peak, peak_bw);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +/*
>>> + * The Memory Controller (MC) has a few Memory Clients that issue memory
>>> + * bandwidth allocation requests to the MC interconnect provider. The MC
>>> + * provider aggregates the requests and then sends the aggregated request
>>> + * up to the External Memory Controller (EMC) interconnect provider which
>>> + * re-configures the hardware interface to the External Memory (EMEM) in
>>> + * accordance with the required bandwidth. Each MC interconnect node
>>> + * represents an individual Memory Client.
>>> + *
>>> + * Memory interconnect topology:
>>> + *
>>> + *               +----+
>>> + * +--------+    |    |
>>> + * | TEXSRD +--->+    |
>>> + * +--------+    |    |
>>> + *               |    |    +-----+    +------+
>>> + *    ...        | MC +--->+ EMC +--->+ EMEM |
>>> + *               |    |    +-----+    +------+
>>> + * +--------+    |    |
>>> + * | DISP.. +--->+    |
>>> + * +--------+    |    |
>>> + *               +----+
>>> + */
>>> +static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>>> +{
>>> +	struct icc_onecell_data *data;
>>> +	struct icc_node *node;
>>> +	unsigned int num_nodes;
>>> +	unsigned int i;
>>> +	int err;
>>> +
>>> +	/* older device-trees don't have interconnect properties */
>>> +	if (!of_find_property(mc->dev->of_node, "#interconnect-cells", NULL))
>>> +		return 0;
>>> +
>>> +	num_nodes = mc->soc->num_clients;
>>> +
>>> +	data = devm_kzalloc(mc->dev, struct_size(data, nodes, num_nodes),
>>> +			    GFP_KERNEL);
>>> +	if (!data)
>>> +		return -ENOMEM;
>>> +
>>> +	mc->provider.dev = mc->dev;
>>> +	mc->provider.set = tegra_mc_icc_set;
>>
>> Hmm, maybe the core should not require a set() implementation and we can
>> just make it optional instead. Then the dummy function would not be needed.
> 
> Eventually this dummy function might be populated with memory latency
> allowance programming. I could add a comment to that function in the next
> version, saying that it's to-be-done for now.

Ah ok! Sounds good, thanks for clarifying!
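
For reference, a minimal sketch of how such a commented stub could look (the
comment wording below is illustrative, not taken from a posted patch):

	/*
	 * The MC icc_set() callback is a no-op for now; it is expected to
	 * gain memory latency allowance (LA) programming in a later series.
	 */
	static int tegra_mc_icc_set(struct icc_node *src, struct icc_node *dst)
	{
		return 0;
	}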

>>> +	mc->provider.data = data;
>>> +	mc->provider.xlate = of_icc_xlate_onecell;
>>> +	mc->provider.aggregate = tegra_mc_icc_aggregate;
>>> +
>>> +	err = icc_provider_add(&mc->provider);
>>> +	if (err)
>>> +		goto err_msg;
>>
>> Nit: I am planning to re-organize some of the existing drivers to call
>> icc_provider_add() after the topology is populated. Could you please move
>> this call to after the nodes are created and linked?
> 
> You are planning to remove the provider's list-head initialization from
> icc_provider_add() [1] and move it into the individual provider drivers,
> correct?

Yes, that would be the first step, but I need to post some patches first,
so let's keep it as-is for now. Sorry for the confusion.
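
To illustrate the eventual ordering (a rough sketch only: it assumes the
provider's node list gets initialized by the driver itself once that core
change lands, and the TEGRA_ICC_* node ids below are placeholders):

	/* inside tegra_mc_interconnect_setup(), after filling mc->provider */
	INIT_LIST_HEAD(&mc->provider.nodes);

	/* create the MC node and link it towards the EMC provider */
	node = icc_node_create(TEGRA_ICC_MC);
	if (IS_ERR(node))
		return PTR_ERR(node);

	icc_node_add(node, &mc->provider);

	err = icc_link_create(node, TEGRA_ICC_EMC);
	if (err)
		return err;

	/* ... create and link the per-client nodes here ... */

	/* register the provider only once the topology is populated */
	err = icc_provider_add(&mc->provider);
	if (err)
		return err;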

Thanks,
Georgi
