Message-ID: <001201d3f19a$0130a860$0391f920$@codeaurora.org>
Date: Tue, 22 May 2018 09:56:19 +0300
From: <ilialin@...eaurora.org>
To: "'Sudeep Holla'" <sudeep.holla@....com>, <mturquette@...libre.com>,
<sboyd@...nel.org>, <robh@...nel.org>, <mark.rutland@....com>,
<viresh.kumar@...aro.org>, <nm@...com>, <lgirdwood@...il.com>,
<broonie@...nel.org>, <andy.gross@...aro.org>,
<david.brown@...aro.org>, <catalin.marinas@....com>,
<will.deacon@....com>, <rjw@...ysocki.net>,
<linux-clk@...r.kernel.org>
Cc: <devicetree@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<linux-pm@...r.kernel.org>, <linux-arm-msm@...r.kernel.org>,
<linux-soc@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>, <rnayak@...eaurora.org>,
<amit.kucheria@...aro.org>, <nicolas.dechesne@...aro.org>,
<celster@...eaurora.org>, <tfinkel@...eaurora.org>
Subject: RE: [PATCH] cpufreq: Add Kryo CPU scaling driver
> -----Original Message-----
> From: Sudeep Holla <sudeep.holla@....com>
> Sent: Monday, May 21, 2018 16:05
> To: ilialin@...eaurora.org; mturquette@...libre.com; sboyd@...nel.org;
> robh@...nel.org; mark.rutland@....com; viresh.kumar@...aro.org;
> nm@...com; lgirdwood@...il.com; broonie@...nel.org;
> andy.gross@...aro.org; david.brown@...aro.org; catalin.marinas@....com;
> will.deacon@....com; rjw@...ysocki.net; linux-clk@...r.kernel.org
> Cc: Sudeep Holla <sudeep.holla@....com>; devicetree@...r.kernel.org;
> linux-kernel@...r.kernel.org; linux-pm@...r.kernel.org; linux-arm-
> msm@...r.kernel.org; linux-soc@...r.kernel.org; linux-arm-
> kernel@...ts.infradead.org; rnayak@...eaurora.org;
> amit.kucheria@...aro.org; nicolas.dechesne@...aro.org;
> celster@...eaurora.org; tfinkel@...eaurora.org
> Subject: Re: [PATCH] cpufreq: Add Kryo CPU scaling driver
>
>
>
> On 21/05/18 13:57, ilialin@...eaurora.org wrote:
> >
> [...]
>
> >>> +#include <linux/cpu.h>
> >>> +#include <linux/err.h>
> >>> +#include <linux/init.h>
> >>> +#include <linux/kernel.h>
> >>> +#include <linux/module.h>
> >>> +#include <linux/nvmem-consumer.h>
> >>> +#include <linux/of.h>
> >>> +#include <linux/platform_device.h>
> >>> +#include <linux/pm_opp.h>
> >>> +#include <linux/slab.h>
> >>> +#include <linux/soc/qcom/smem.h>
> >>> +
> >>> +#define MSM_ID_SMEM 137
> >>> +#define SILVER_LEAD 0
> >>> +#define GOLD_LEAD 2
> >>> +
> >>
> >> So I gather from other emails that these are physical CPU numbers (not
> >> even a unique identifier like MPIDR). Will this work on parts or
> >> platforms that need to boot on the GOLD LEAD CPUs?
> >
> > The driver is for the Kryo CPU, which (like, AFAIK, all multicore MSMs)
> > always boots on CPU0.
>
>
> That may be true, and I am not that bothered about it. But assuming physical
> ordering from the logical CPU number is *incorrect* and will break if the
> kernel decides to change the allocation algorithm. The kernel provides no
> guarantee on that, so you need to depend on some physical ID, or maybe the
> DT, to achieve what you want. But the current code as it stands is wrong.
Got your point. In fact, the CPUs are numbered 0-3 and grouped into two clusters in the DT:
cpus {
	#address-cells = <2>;
	#size-cells = <0>;

	CPU0: cpu@0 {
		...
		reg = <0x0 0x0>;
		...
	};

	CPU1: cpu@1 {
		...
		reg = <0x0 0x1>;
		...
	};

	CPU2: cpu@100 {
		...
		reg = <0x0 0x100>;
		...
	};

	CPU3: cpu@101 {
		...
		reg = <0x0 0x101>;
		...
	};

	cpu-map {
		cluster0 {
			core0 {
				cpu = <&CPU0>;
			};
			core1 {
				cpu = <&CPU1>;
			};
		};

		cluster1 {
			core0 {
				cpu = <&CPU2>;
			};
			core1 {
				cpu = <&CPU3>;
			};
		};
	};
};
As far as I understand, they are probed in the same order. However, to be certain that the physical CPU is the one I intend to configure, I have to fetch the device node for the cpu-map -> clusterX -> core0 -> cpu path. Could you suggest a kernel API to do that?
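
For illustration only, the untested sketch below is roughly what I had in mind (the helper name and error handling are mine): it walks /cpus/cpu-map -> clusterN -> core0, resolves the "cpu" phandle, and matches it back to a logical CPU number. Would that be an acceptable direction, or is there an existing helper (or should I rather match on the MPIDR from cpu_logical_map()) that I should use instead?

/* Untested sketch: map "cluster<idx>/core0" in /cpus/cpu-map to a logical CPU. */
#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/of.h>

static int qcom_cpufreq_kryo_cluster_lead(int cluster_idx)
{
	struct device_node *map, *cluster, *core, *cpu_np;
	char name[16];
	int cpu, ret = -ENOENT;

	map = of_find_node_by_path("/cpus/cpu-map");
	if (!map)
		return -ENOENT;

	snprintf(name, sizeof(name), "cluster%d", cluster_idx);
	cluster = of_get_child_by_name(map, name);
	of_node_put(map);
	if (!cluster)
		return -ENOENT;

	core = of_get_child_by_name(cluster, "core0");
	of_node_put(cluster);
	if (!core)
		return -ENOENT;

	/* Resolve the "cpu" phandle of the cluster's lead core. */
	cpu_np = of_parse_phandle(core, "cpu", 0);
	of_node_put(core);
	if (!cpu_np)
		return -ENOENT;

	/* Find the logical CPU whose DT node is the phandle target. */
	for_each_possible_cpu(cpu) {
		struct device_node *np = of_get_cpu_node(cpu, NULL);

		of_node_put(np);
		if (np == cpu_np) {
			ret = cpu;
			break;
		}
	}
	of_node_put(cpu_np);

	return ret;
}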
>
> --
> Regards,
> Sudeep