Message-ID: <CAD=FV=WF4=4ZgeBE8WYysW1HevynMrPqv6oU4a=hZMQ_z9juiA@mail.gmail.com>
Date: Mon, 4 May 2020 10:54:54 -0700
From: Doug Anderson <dianders@...omium.org>
To: Maulik Shah <mkshah@...eaurora.org>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Andy Gross <agross@...nel.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Evan Green <evgreen@...omium.org>,
Matthias Kaehlcke <mka@...omium.org>,
Stephen Boyd <swboyd@...omium.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v5 1/5] soc: qcom: rpmh-rsc: Correctly ignore
CPU_CLUSTER_PM notifications
Hi,
On Sun, May 3, 2020 at 10:19 PM Maulik Shah <mkshah@...eaurora.org> wrote:
>
> Hi,
>
> On 4/24/2020 10:16 PM, Douglas Anderson wrote:
> > Our switch statement doesn't have entries for CPU_CLUSTER_PM_ENTER,
> > CPU_CLUSTER_PM_ENTER_FAILED, and CPU_CLUSTER_PM_EXIT and doesn't have
> > a default. This means that we'll try to do a flush in those cases but
> > we won't necessarily be the last CPU down. That's not so ideal since
> > our (lack of) locking assumes we're on the last CPU.
> >
> > Luckily this isn't as big a problem as you'd think since (at least on
> > the SoC I tested) we don't get these notifications except on full
> > system suspend. ...and on full system suspend we get them on the last
> > CPU down. That means that the worst problem we hit is flushing twice.
> > Still, it's good to make it correct.
> >
> > Fixes: 985427f997b6 ("soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches")
> > Reported-by: Stephen Boyd <swboyd@...omium.org>
> > Signed-off-by: Douglas Anderson <dianders@...omium.org>
> > Reviewed-by: Maulik Shah <mkshah@...eaurora.org>
> > Reviewed-by: Stephen Boyd <swboyd@...omium.org>
> > ---
> >
> > Changes in v5:
> > - Corrently => Correctly
> >
> > Changes in v4:
> > - ("...Corrently ignore CPU_CLUSTER_PM notifications") split out for v4.
> >
> > Changes in v3: None
> > Changes in v2: None
> >
> > drivers/soc/qcom/rpmh-rsc.c | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> > diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> > index a9e15699f55f..3571a99fc839 100644
> > --- a/drivers/soc/qcom/rpmh-rsc.c
> > +++ b/drivers/soc/qcom/rpmh-rsc.c
> > @@ -806,6 +806,8 @@ static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
> > case CPU_PM_EXIT:
> > cpumask_clear_cpu(smp_processor_id(), &drv->cpus_entered_pm);
> > goto exit;
> > + default:
> > + return NOTIFY_DONE;
>
> I noticed a bug here.
>
> You either need to unlock before returning here:
Dang it! Thank you very much for noticing. v6 sent. I removed both
yours and Stephen's "Reviewed-by" tags. Please re-review to make sure
I didn't do anything else stupid.
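For clarity, the shape I have in mind follows your first suggestion
below; roughly (a sketch with field/helper names paraphrased from the
patch context above, not the literal v6 hunk):

static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
				    unsigned long action, void *v)
{
	struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
	int ret = NOTIFY_OK;

	spin_lock(&drv->pm_lock);

	switch (action) {
	case CPU_PM_ENTER:
		cpumask_set_cpu(smp_processor_id(), &drv->cpus_entered_pm);
		/* Only fall through to the flush if we're the last CPU. */
		if (!cpumask_equal(&drv->cpus_entered_pm, cpu_online_mask))
			goto exit;
		break;
	case CPU_PM_ENTER_FAILED:
	case CPU_PM_EXIT:
		cpumask_clear_cpu(smp_processor_id(), &drv->cpus_entered_pm);
		goto exit;
	default:
		/*
		 * Ignore cluster-level notifications, but still go
		 * through "exit" so pm_lock gets dropped.
		 */
		ret = NOTIFY_DONE;
		goto exit;
	}

	/* ... last-CPU-down flush work goes here ... */

exit:
	spin_unlock(&drv->pm_lock);
	return ret;
}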
> + default:
> + ret = NOTIFY_DONE;
> + goto exit;
>
> Or
>
> If you move this patch to the end of the series, it will work fine as is,
> since patch 5 of this series removes pm_lock, and a plain return
> NOTIFY_DONE; then does not need any unlock.
Right. It used to be part of the last patch but Stephen requested I
move the bugfixes first so they could land sooner even if we aren't
ready to land the "remove the pm_lock" patch.
> When I pulled in only the first two changes in this series, I got spinlock
> recursion during suspend-resume.
> Back when I pulled in the entire series for validation, the issue did not
> show up because the last patch removes pm_lock.
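That matches the leaked lock. Roughly the sequence I'd expect with just
the first two patches applied (my reconstruction, not a captured trace):

	/*
	 * CPU_PM_ENTER         -> takes pm_lock, releases it via "exit"
	 * CPU_CLUSTER_PM_ENTER -> takes pm_lock, hits "default:", returns
	 *                         NOTIFY_DONE with pm_lock still held
	 * CPU_CLUSTER_PM_EXIT  -> same CPU tries to take pm_lock again,
	 *                         which DEBUG_SPINLOCK flags as recursion
	 */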
OK, v6 is sent out.
-Doug