Message-ID: <CAD=FV=UKAVSsk=4NtqgsdR3MVTtTQiJVHGaLnu+WLt5mWCZXtQ@mail.gmail.com>
Date: Wed, 22 Apr 2020 14:55:44 -0700
From: Doug Anderson <dianders@...omium.org>
To: Stephen Boyd <swboyd@...omium.org>
Cc: Andy Gross <agross@...nel.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Maulik Shah <mkshah@...eaurora.org>,
Matthias Kaehlcke <mka@...omium.org>,
Evan Green <evgreen@...omium.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 3/3] soc: qcom: rpmh-rsc: Remove the pm_lock
Hi,
On Wed, Apr 22, 2020 at 3:33 AM Stephen Boyd <swboyd@...omium.org> wrote:
>
> Quoting Douglas Anderson (2020-04-21 10:29:08)
> > case CPU_PM_ENTER_FAILED:
> > case CPU_PM_EXIT:
> > - cpumask_clear_cpu(smp_processor_id(), &drv->cpus_entered_pm);
> > - goto exit;
> > + atomic_dec(&drv->cpus_in_pm);
> > + return NOTIFY_OK;
> > + default:
> > + return NOTIFY_DONE;
>
> Can this be split out and merged now? It's a bugfix for code that is in
> -next.
Sure. I guess I had visions that the removal of the pm_lock would
make it into -next soon-ish too...
Interestingly, when testing the split-out patch I found that it isn't
nearly as important as it first appears. Specifically, we don't seem to
get cluster notifications except for a final one at the end of full
system suspend. Grepping for cpu_cluster_pm_enter(), the only calls I
see (other than the one from cpu_pm_suspend()) are in "arch/arm", not
arm64.
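For context, here's roughly the shape the notifier ends up with, written
out as a self-contained sketch rather than the real driver code (the
struct is trimmed way down, the flush logic on the last CPU is elided,
and the function name may not match the driver exactly):

#include <linux/atomic.h>
#include <linux/cpu_pm.h>
#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/notifier.h>

/* Trimmed-down stand-in for the real struct rsc_drv. */
struct rsc_drv {
        struct notifier_block rsc_pm;
        atomic_t cpus_in_pm;
};

static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
                                    unsigned long action, void *v)
{
        struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);

        switch (action) {
        case CPU_PM_ENTER:
                /*
                 * If this turns out to be the last CPU going down, the
                 * real driver flushes sleep/wake state here (elided in
                 * this sketch).
                 */
                atomic_inc(&drv->cpus_in_pm);
                return NOTIFY_OK;
        case CPU_PM_ENTER_FAILED:
        case CPU_PM_EXIT:
                atomic_dec(&drv->cpus_in_pm);
                return NOTIFY_OK;
        default:
                /*
                 * CPU_CLUSTER_PM_ENTER / CPU_CLUSTER_PM_EXIT end up here.
                 * On arm64 the only caller of cpu_cluster_pm_enter()
                 * appears to be cpu_pm_suspend(), so in practice we only
                 * see these once at full system suspend, not on every
                 * deep-idle entry.
                 */
                return NOTIFY_DONE;
        }
}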
I've also split out my own bugfix for the case where we don't get
notified about our own failure.
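The gist of that one, again as a hedged sketch building on the one above
(the "busy" decision is just passed in as a flag here since the real
check doesn't matter for the point): cpu_pm_enter() only replays
CPU_PM_ENTER_FAILED to the notifiers that had already said OK, so if
we're the ones returning NOTIFY_BAD we never hear about our own failure
and have to undo our own increment before bailing.

/* Sketch of just the CPU_PM_ENTER leg, with the veto path included. */
static int rpmh_rsc_cpu_pm_enter(struct rsc_drv *drv, bool ctrlr_busy)
{
        if (atomic_inc_return(&drv->cpus_in_pm) == num_online_cpus() &&
            ctrlr_busy) {
                /*
                 * We're vetoing the low-power mode ourselves.  cpu_pm
                 * won't send CPU_PM_ENTER_FAILED back to the notifier
                 * that failed, so drop our own count before returning.
                 */
                atomic_dec(&drv->cpus_in_pm);
                return NOTIFY_BAD;
        }

        return NOTIFY_OK;
}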
v4 posted now...
-Doug