Message-ID: <82d27a47-f189-6609-a584-c9ca1b35a76c@gmail.com>
Date:   Thu, 2 Jul 2020 02:36:40 +0300
From:   Dmitry Osipenko <digetx@...il.com>
To:     Georgi Djakov <georgi.djakov@...aro.org>
Cc:     Thierry Reding <thierry.reding@...il.com>,
        Jonathan Hunter <jonathanh@...dia.com>,
        Rob Herring <robh+dt@...nel.org>,
        Michael Turquette <mturquette@...libre.com>,
        Stephen Boyd <sboyd@...nel.org>,
        Peter De Schrijver <pdeschrijver@...dia.com>,
        MyungJoo Ham <myungjoo.ham@...sung.com>,
        Kyungmin Park <kyungmin.park@...sung.com>,
        Chanwoo Choi <cw00.choi@...sung.com>,
        Mikko Perttunen <cyndis@...si.fi>,
        Artur Świgoń <a.swigon@...sung.com>,
        linux-tegra@...r.kernel.org, linux-pm@...r.kernel.org,
        linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org,
        devicetree@...r.kernel.org
Subject: Re: [PATCH v4 28/37] memory: tegra: Register as interconnect provider

On 01.07.2020 20:12, Georgi Djakov wrote:
> Hi Dmitry,
> 
> Thank you for updating the patches!

Hello, Georgi!

Thank you for the review!

> On 6/9/20 16:13, Dmitry Osipenko wrote:
>> Now the memory controller is a memory interconnect provider. This allows
>> us to use the interconnect API in order to change the memory configuration.
>>
>> Signed-off-by: Dmitry Osipenko <digetx@...il.com>
>> ---
>>  drivers/memory/tegra/Kconfig |   1 +
>>  drivers/memory/tegra/mc.c    | 114 +++++++++++++++++++++++++++++++++++
>>  drivers/memory/tegra/mc.h    |   8 +++
>>  include/soc/tegra/mc.h       |   3 +
>>  4 files changed, 126 insertions(+)
>>
>> diff --git a/drivers/memory/tegra/Kconfig b/drivers/memory/tegra/Kconfig
>> index 5bf75b316a2f..7055fdef2c32 100644
>> --- a/drivers/memory/tegra/Kconfig
>> +++ b/drivers/memory/tegra/Kconfig
>> @@ -3,6 +3,7 @@ config TEGRA_MC
>>  	bool "NVIDIA Tegra Memory Controller support"
>>  	default y
>>  	depends on ARCH_TEGRA
>> +	select INTERCONNECT
>>  	help
>>  	  This driver supports the Memory Controller (MC) hardware found on
>>  	  NVIDIA Tegra SoCs.
>> diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
>> index 772aa021b5f6..7ef7ac9e103e 100644
>> --- a/drivers/memory/tegra/mc.c
>> +++ b/drivers/memory/tegra/mc.c
>> @@ -594,6 +594,118 @@ static __maybe_unused irqreturn_t tegra20_mc_irq(int irq, void *data)
>>  	return IRQ_HANDLED;
>>  }
>>  
>> +static int tegra_mc_icc_set(struct icc_node *src, struct icc_node *dst)
>> +{
>> +	return 0;
>> +}
>> +
>> +static int tegra_mc_icc_aggregate(struct icc_node *node,
>> +				  u32 tag, u32 avg_bw, u32 peak_bw,
>> +				  u32 *agg_avg, u32 *agg_peak)
>> +{
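>> +	/* sum the average bandwidth, saturating at U32_MAX on overflow */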
>> +	*agg_avg = min((u64)avg_bw + (*agg_avg), (u64)U32_MAX);
>> +	*agg_peak = max(*agg_peak, peak_bw);
>> +
>> +	return 0;
>> +}
>> +
>> +/*
>> + * The Memory Controller (MC) has a number of Memory Clients that issue
>> + * memory bandwidth allocation requests to the MC interconnect provider.
>> + * The MC provider aggregates the requests and then sends the aggregated
>> + * request up to the External Memory Controller (EMC) interconnect
>> + * provider, which re-configures the hardware interface to the External
>> + * Memory (EMEM) in accordance with the required bandwidth. Each MC
>> + * interconnect node represents an individual Memory Client.
>> + *
>> + * Memory interconnect topology:
>> + *
>> + *               +----+
>> + * +--------+    |    |
>> + * | TEXSRD +--->+    |
>> + * +--------+    |    |
>> + *               |    |    +-----+    +------+
>> + *    ...        | MC +--->+ EMC +--->+ EMEM |
>> + *               |    |    +-----+    +------+
>> + * +--------+    |    |
>> + * | DISP.. +--->+    |
>> + * +--------+    |    |
>> + *               +----+
>> + */
>> +static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>> +{
>> +	struct icc_onecell_data *data;
>> +	struct icc_node *node;
>> +	unsigned int num_nodes;
>> +	unsigned int i;
>> +	int err;
>> +
>> +	/* older device-trees don't have interconnect properties */
>> +	if (!of_find_property(mc->dev->of_node, "#interconnect-cells", NULL))
>> +		return 0;
>> +
>> +	num_nodes = mc->soc->num_clients;
>> +
>> +	data = devm_kzalloc(mc->dev, struct_size(data, nodes, num_nodes),
>> +			    GFP_KERNEL);
>> +	if (!data)
>> +		return -ENOMEM;
>> +
>> +	mc->provider.dev = mc->dev;
>> +	mc->provider.set = tegra_mc_icc_set;
> 
> Hmm, maybe the core should not require a set() implementation and we can
> just make it optional instead. Then the dummy function would not be needed.

Eventually this dummy function might become populated with memory latency
allowance programming. I could add a comment to that function in the next
version, saying that it's to-be-done for now.
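
For example, it could end up looking roughly like this (just a sketch;
tegra_mc_set_latency_allowance() is a hypothetical helper that doesn't
exist yet, and the exact form is to-be-decided):

static int tegra_mc_icc_set(struct icc_node *src, struct icc_node *dst)
{
	struct tegra_mc *mc = container_of(src->provider, struct tegra_mc,
					   provider);

	/*
	 * Hypothetical: program the Memory Client's latency allowance
	 * based on the bandwidth that was aggregated for this node.
	 */
	tegra_mc_set_latency_allowance(mc, src->id, src->avg_bw);

	return 0;
}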

>> +	mc->provider.data = data;
>> +	mc->provider.xlate = of_icc_xlate_onecell;
>> +	mc->provider.aggregate = tegra_mc_icc_aggregate;
>> +
>> +	err = icc_provider_add(&mc->provider);
>> +	if (err)
>> +		goto err_msg;
> 
> Nit: I am planning to re-organize some of the existing drivers to call
> icc_provider_add() after the topology is populated. Could you please move
> this after the nodes are created and linked?

You are planning to remove the provider's list-head initialization from
icc_provider_add() [1] and move it to the individual provider drivers,
correct?

[1]
https://elixir.bootlin.com/linux/v5.8-rc3/source/drivers/interconnect/core.c#L977

If so, then it should be easy to move the icc_provider_add() invocation in
this driver. Otherwise, please clarify a bit more.

Please note that the same "node" variable is used here both for creating
the nodes and for linking them to the provider, which keeps the code nice
and clean. Hence, the provider's list-head needs to be initialized before
the linking happens.
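
To illustrate, the creation/linking part of this driver looks roughly
like this (simplified; "mc_node_id" stands in for the id of the MC node
that the clients are linked to):

	for (i = 0; i < num_nodes; i++) {
		/* create a node for the Memory Client */
		node = icc_node_create(mc->soc->clients[i].id);
		if (IS_ERR(node)) {
			err = PTR_ERR(node);
			goto remove_nodes;
		}

		node->name = mc->soc->clients[i].name;

		/* icc_node_add() puts the node onto the provider's list */
		icc_node_add(node, &mc->provider);

		/* link the Memory Client node to the MC node */
		err = icc_link_create(node, mc_node_id);
		if (err)
			goto remove_nodes;

		data->nodes[i] = node;
	}

So if icc_provider_add() were moved after this loop, icc_node_add()
would operate on an uninitialized list-head.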

...
> The rest looks good to me!

Thanks!
