Message-ID: <158769593201.135303.16055600803132525490@swboyd.mtv.corp.google.com>
Date: Thu, 23 Apr 2020 19:38:52 -0700
From: Stephen Boyd <swboyd@...omium.org>
To: Andy Gross <agross@...nel.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Douglas Anderson <dianders@...omium.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
rafael.j.wysocki@...el.com
Cc: mka@...omium.org, mkshah@...eaurora.org, evgreen@...omium.org,
Douglas Anderson <dianders@...omium.org>,
linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 1/5] soc: qcom: rpmh-rsc: Correctly ignore CPU_CLUSTER_PM notifications
Quoting Douglas Anderson (2020-04-22 14:54:59)
> Our switch statement has no entries for CPU_CLUSTER_PM_ENTER,
> CPU_CLUSTER_PM_ENTER_FAILED, or CPU_CLUSTER_PM_EXIT and no default.
> This means that on those notifications we'll try to do a flush even
> though we aren't necessarily on the last CPU down. That's not ideal,
> since our (lack of) locking assumes we're on the last CPU.
>
> Luckily this isn't as big a problem as you'd think: at least on the
> SoC I tested, we only get these notifications on full system suspend,
> and on full system suspend we get them on the last CPU down. That
> means the worst problem we hit is flushing twice. Still, it's good to
> make it correct.
>
> Fixes: 985427f997b6 ("soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches")
> Reported-by: Stephen Boyd <swboyd@...omium.org>
> Signed-off-by: Douglas Anderson <dianders@...omium.org>
> ---
Reviewed-by: Stephen Boyd <swboyd@...omium.org>
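
For anyone following along, here's a minimal sketch of the shape the
notifier ends up with after a change like this: the per-CPU events are
handled explicitly and everything else (including the CPU_CLUSTER_PM_*
notifications) falls through to a default that returns NOTIFY_DONE. The
function name below is illustrative, not the exact rpmh-rsc code.

#include <linux/cpu_pm.h>
#include <linux/notifier.h>

static int example_cpu_pm_callback(struct notifier_block *nb,
				   unsigned long action, void *v)
{
	switch (action) {
	case CPU_PM_ENTER:
		/* Only flush if this really is the last CPU going down. */
		break;
	case CPU_PM_ENTER_FAILED:
	case CPU_PM_EXIT:
		break;
	default:
		/*
		 * CPU_CLUSTER_PM_ENTER, CPU_CLUSTER_PM_ENTER_FAILED and
		 * CPU_CLUSTER_PM_EXIT land here and are ignored, so we
		 * never flush from a CPU that isn't the last one down.
		 */
		return NOTIFY_DONE;
	}

	return NOTIFY_OK;
}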