Message-ID: <20210708090808.GA21260@in.ibm.com>
Date: Thu, 8 Jul 2021 14:38:08 +0530
From: Gautham R Shenoy <ego@...ux.vnet.ibm.com>
To: "Pratik R. Sampat" <psampat@...ux.ibm.com>
Cc: mpe@...erman.id.au, rjw@...ysocki.net, linux-pm@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
pratik.r.sampat@...il.com
Subject: Re: [PATCH] cpufreq:powernv: Fix init_chip_info initialization in
numa=off
Hello Pratik,
On Tue, Jun 15, 2021 at 10:39:49AM +0530, Pratik R. Sampat wrote:
> In the numa=off kernel command-line configuration init_chip_info() loops
> around the number of chips and attempts to copy the cpumask of that node
> which is NULL for all iterations after the first chip.
Thanks for taking a look into this. Indeed there is an issue here,
because the code assumes that the node mask can be used as a proxy for
the chip mask. This assumption breaks when running with numa=off, since
there will be only a single node but multiple chips.
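If I remember the code correctly, the relevant part of init_chip_info()
looks roughly like the following (simplified and reproduced from memory,
so treat it as illustrative only):

	for (i = 0; i < nr_chips; i++) {
		chips[i].id = chip[i];
		/*
		 * Assumes chip-id == node-id; with numa=off only node 0
		 * exists, so for every chip after the first this copies
		 * the mask of a node that does not exist.
		 */
		cpumask_copy(&chips[i].mask, cpumask_of_node(chip[i]));
		INIT_WORK(&chips[i].throttle, powernv_cpufreq_work_fn);
		for_each_cpu(cpu, &chips[i].mask)
			per_cpu(chip_info, cpu) = &chips[i];
	}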
>
> Hence adding a check to bail out after the first initialization if there
> is only one node.
>
> Fixes: 053819e0bf84 ("cpufreq: powernv: Handle throttling due to Pmax capping at chip level")
> Signed-off-by: Pratik R. Sampat <psampat@...ux.ibm.com>
> Reported-by: Shirisha Ganta <shirishaganta1@....com>
> ---
> drivers/cpufreq/powernv-cpufreq.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c
> index e439b43c19eb..663f9c4b5e3a 100644
> --- a/drivers/cpufreq/powernv-cpufreq.c
> +++ b/drivers/cpufreq/powernv-cpufreq.c
> @@ -1078,6 +1078,8 @@ static int init_chip_info(void)
> INIT_WORK(&chips[i].throttle, powernv_cpufreq_work_fn);
> for_each_cpu(cpu, &chips[i].mask)
> per_cpu(chip_info, cpu) = &chips[i];
> + if (num_possible_nodes() == 1)
> + break;
With this we will only initialize chips[0].throttle work function,
while chips[i].throttle for the rest of the chips will remain
uninitialized. While we may be running in numa=off mode, the fact
remains that those other chips do exist and they may be experiencing
throttling, at which point the driver will try to schedule the
corresponding chips[i] work in order to take corrective action, and
that will fail since the work was never initialized.
Hence a more correct approach may be to maintain a per-chip cpumask
that is independent of the node mask, as in the sketch below.
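Something along these lines (completely untested, just to illustrate
the idea) would keep the chip->cpu mapping correct irrespective of the
node configuration, by deriving each chip's mask from cpu_to_chip_id()
instead of cpumask_of_node():

	for (i = 0; i < nr_chips; i++) {
		chips[i].id = chip[i];
		cpumask_clear(&chips[i].mask);
		/*
		 * Build the chip mask directly from the cpu->chip
		 * mapping rather than assuming chip-id == node-id.
		 */
		for_each_possible_cpu(cpu)
			if (cpu_to_chip_id(cpu) == chips[i].id)
				cpumask_set_cpu(cpu, &chips[i].mask);
		INIT_WORK(&chips[i].throttle, powernv_cpufreq_work_fn);
		for_each_cpu(cpu, &chips[i].mask)
			per_cpu(chip_info, cpu) = &chips[i];
	}

That way every chip gets its throttle work initialized and its CPUs get
a valid chip_info pointer, even when there is only a single node.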
> }
>
> free_and_return:
> --
> 2.30.2
>