Message-ID: <0d04d2c8-8f87-ecc7-9bd6-633d84b60e8b@nvidia.com>
Date: Mon, 13 Jul 2020 19:59:28 +0530
From: Sumit Gupta <sumitg@...dia.com>
To: Viresh Kumar <viresh.kumar@...aro.org>
CC: <rjw@...ysocki.net>, <catalin.marinas@....com>, <will@...nel.org>,
<thierry.reding@...il.com>, <robh+dt@...nel.org>,
<devicetree@...r.kernel.org>, <jonathanh@...dia.com>,
<talho@...dia.com>, <linux-pm@...r.kernel.org>,
<linux-tegra@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <bbasu@...dia.com>,
<mperttunen@...dia.com>, Sumit Gupta <sumitg@...dia.com>,
<mirq-linux@...e.qmqm.pl>
Subject: Re: [TEGRA194_CPUFREQ PATCH v4 3/4] cpufreq: Add Tegra194 cpufreq
driver
>
> On 26-06-20, 21:13, Sumit Gupta wrote:
>> +static int tegra194_cpufreq_probe(struct platform_device *pdev)
>> +{
>> + struct tegra194_cpufreq_data *data;
>> + struct tegra_bpmp *bpmp;
>> + int err, i;
>> +
>> + data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
>> + if (!data)
>> + return -ENOMEM;
>> +
>> + data->num_clusters = MAX_CLUSTERS;
>> + data->tables = devm_kcalloc(&pdev->dev, data->num_clusters,
>> + sizeof(*data->tables), GFP_KERNEL);
>> + if (!data->tables)
>> + return -ENOMEM;
>> +
>> + platform_set_drvdata(pdev, data);
>> +
>> + bpmp = tegra_bpmp_get(&pdev->dev);
>> + if (IS_ERR(bpmp))
>> + return PTR_ERR(bpmp);
>> +
>> + read_counters_wq = alloc_workqueue("read_counters_wq", __WQ_LEGACY, 1);
>> + if (!read_counters_wq) {
>> + dev_err(&pdev->dev, "fail to create_workqueue\n");
>> + err = -EINVAL;
>> + goto put_bpmp;
>
> This will call destroy_workqueue() eventually and it will crash your
> kernel.
>
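Understood -- if alloc_workqueue() fails here and the error path can still
reach destroy_workqueue() on a workqueue that was never created, it will
crash. As a rough sketch (not the exact v5 code; example_init_tables() is
only a placeholder for the later setup steps), the cleanup labels need to
be ordered so each label undoes only what was already set up:

#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/workqueue.h>
#include <soc/tegra/bpmp.h>

static int example_probe(struct platform_device *pdev)
{
	struct workqueue_struct *wq;
	struct tegra_bpmp *bpmp;
	int err;

	bpmp = tegra_bpmp_get(&pdev->dev);
	if (IS_ERR(bpmp))
		return PTR_ERR(bpmp);

	wq = alloc_workqueue("read_counters_wq", __WQ_LEGACY, 1);
	if (!wq) {
		err = -ENOMEM;
		goto put_bpmp;			/* workqueue was never created */
	}

	err = example_init_tables(pdev, bpmp);	/* placeholder for later setup */
	if (err)
		goto free_wq;

	tegra_bpmp_put(bpmp);			/* bpmp only needed during probe */
	return 0;

free_wq:
	destroy_workqueue(wq);			/* only reached after successful alloc */
put_bpmp:
	tegra_bpmp_put(bpmp);
	return err;
}
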
> Apart from this, this stuff looks okay. Don't resend the patch just
> yet (and if required, send only this patch using --in-reply-to flag
> for git send email). Lets wait for an Ack from Rob for the first two
> patches.
>
Sorry for the delayed response; I was on PTO.
Thank you for the feedback.
I have posted v5 based on the v4 patch set.
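For reference, if resending only this patch in the same thread is preferred
later, an invocation along these lines should work (the message-id and file
name below are placeholders, not the real ones):

  git send-email --in-reply-to='<message-id-of-the-v4-3/4-mail>' \
          v5-0003-cpufreq-Add-Tegra194-cpufreq-driver.patch
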
>> + }
>> +
>
> --
> viresh
>