Message-ID: <2133387.cqCEZmzLhk@kreacher>
Date: Thu, 05 Nov 2020 15:47:11 +0100
From: "Rafael J. Wysocki" <rjw@...ysocki.net>
To: Ionela Voinescu <ionela.voinescu@....com>
Cc: "Rafael J. Wysocki" <rafael@...nel.org>,
Viresh Kumar <viresh.kumar@...aro.org>,
Len Brown <lenb@...nel.org>,
Sudeep Holla <sudeep.holla@....com>,
Morten Rasmussen <morten.rasmussen@....com>,
Jeremy Linton <jeremy.linton@....com>,
Linux PM <linux-pm@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 8/8] acpi: fix NONE coordination for domain mapping failure
On Thursday, November 5, 2020 3:02:02 PM CET Ionela Voinescu wrote:
> Hi Rafael,
>
> On Thursday 05 Nov 2020 at 14:05:55 (+0100), Rafael J. Wysocki wrote:
> > On Thu, Nov 5, 2020 at 1:57 PM Ionela Voinescu <ionela.voinescu@....com> wrote:
> > >
> > > For errors parsing the _PSD domains, a separate domain is returned for
> > > each CPU in the failed _PSD domain with no coordination (as per previous
> > > comment). But contrary to that intention, the code was setting
> > > CPUFREQ_SHARED_TYPE_ALL as the coordination type.
> > >
> > > Change shared_type to CPUFREQ_SHARED_TYPE_NONE in case of errors parsing
> > > the domain information. The function still returns the error and the caller
> > > is free to bail out of the domain initialisation altogether in that case.
> > >
> > > Given that both functions return domains with a single CPU, this change
> > > does not affect the functionality, but clarifies the intention.
> >
> > Is this related to any other patches in the series?
> >
>
> It does not depend on any of the other patches. I first noticed this in
> acpi_get_psd_map(), which is used solely by cppc_cpufreq.c, but looking
> into it some more showed that processor_perflib.c's
> acpi_processor_preregister_performance() has the same inconsistency.
>
> I can submit this separately, if that works better.
No need this time, but in general sending unrelated changes separately is less
confusing.
Thanks!
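
For reference, the error-path behaviour the changelog describes boils down
to something like the standalone sketch below. This is not the actual kernel
code: the struct and function names (psd_domain, psd_fallback_to_none,
nr_cpus) are made up for illustration, and only the CPUFREQ_SHARED_TYPE_*
names come from the patch; their values here are assumed to mirror
include/linux/cpufreq.h.

/*
 * Sketch of the fallback applied when _PSD parsing fails: each CPU is
 * placed in its own single-CPU domain, and the coordination type is set
 * to CPUFREQ_SHARED_TYPE_NONE rather than CPUFREQ_SHARED_TYPE_ALL.
 */
#include <stdio.h>

#define CPUFREQ_SHARED_TYPE_NONE 0   /* no coordination */
#define CPUFREQ_SHARED_TYPE_ALL  2   /* all CPUs in the domain set the freq */

#define NR_CPUS 4

struct psd_domain {                  /* hypothetical stand-in type */
	int shared_type;             /* coordination type for this domain */
	int nr_cpus;                 /* number of CPUs sharing the domain */
};

/* Fallback used when _PSD parsing fails for a coordination domain. */
static void psd_fallback_to_none(struct psd_domain *domains, int nr_cpus)
{
	for (int cpu = 0; cpu < nr_cpus; cpu++) {
		/* One independent, single-CPU domain per CPU ... */
		domains[cpu].nr_cpus = 1;
		/* ... with no coordination, while the error is still
		 * propagated to the caller. */
		domains[cpu].shared_type = CPUFREQ_SHARED_TYPE_NONE;
	}
}

int main(void)
{
	struct psd_domain domains[NR_CPUS] = { 0 };

	psd_fallback_to_none(domains, NR_CPUS);

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d: shared_type=%d nr_cpus=%d\n",
		       cpu, domains[cpu].shared_type, domains[cpu].nr_cpus);
	return 0;
}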