Message-ID: <92bf14b7-b7ae-3060-312e-74f57c1f9a63@codeaurora.org>
Date: Thu, 5 Mar 2020 17:00:26 +0530
From: Maulik Shah <mkshah@...eaurora.org>
To: Doug Anderson <dianders@...omium.org>
Cc: Stephen Boyd <swboyd@...omium.org>,
Matthias Kaehlcke <mka@...omium.org>,
Evan Green <evgreen@...omium.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
Andy Gross <agross@...nel.org>,
Rajendra Nayak <rnayak@...eaurora.org>,
Lina Iyer <ilina@...eaurora.org>, lsrao@...eaurora.org
Subject: Re: [PATCH v10 3/3] soc: qcom: rpmh: Invoke rpmh_flush() for dirty
caches
On 3/5/2020 4:52 AM, Doug Anderson wrote:
> Hi,
>
> On Tue, Mar 3, 2020 at 4:27 AM Maulik Shah <mkshah@...eaurora.org> wrote:
>> Add changes to invoke rpmh_flush() from within the cache_lock when the
>> data in the cache is dirty.
>>
>> This is done only if OSI is not supported in PSCI. If OSI is supported,
>> rpmh_flush() can be invoked when the last CPU goes to power collapse in
>> the deepest low power mode.
>>
>> Also remove "depends on COMPILE_TEST" for the Kconfig option QCOM_RPMH so
>> the driver is only compiled for arm64, which supports the
>> psci_has_osi_support() API.
>>
>> Signed-off-by: Maulik Shah <mkshah@...eaurora.org>
>> Reviewed-by: Srinivas Rao L <lsrao@...eaurora.org>
>> ---
>> drivers/soc/qcom/Kconfig | 2 +-
>> drivers/soc/qcom/rpmh.c | 37 ++++++++++++++++++++++---------------
>> 2 files changed, 23 insertions(+), 16 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
>> index d0a73e7..2e581bc 100644
>> --- a/drivers/soc/qcom/Kconfig
>> +++ b/drivers/soc/qcom/Kconfig
>> @@ -105,7 +105,7 @@ config QCOM_RMTFS_MEM
>>
>> config QCOM_RPMH
>> bool "Qualcomm RPM-Hardened (RPMH) Communication"
>> - depends on ARCH_QCOM && ARM64 || COMPILE_TEST
>> + depends on ARCH_QCOM && ARM64
>> help
>> Support for communication with the hardened-RPM blocks in
>> Qualcomm Technologies Inc (QTI) SoCs. RPMH communication uses an
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index f28afe4..dafb0da 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -12,6 +12,7 @@
>> #include <linux/module.h>
>> #include <linux/of.h>
>> #include <linux/platform_device.h>
>> +#include <linux/psci.h>
>> #include <linux/slab.h>
>> #include <linux/spinlock.h>
>> #include <linux/types.h>
>> @@ -158,6 +159,13 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
>> }
>>
>> unlock:
>> + if (ctrlr->dirty && !psci_has_osi_support()) {
>> + if (rpmh_flush(ctrlr)) {
>> + spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>> + return ERR_PTR(-EINVAL);
>> + }
>> + }
>> +
>> spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>
>> return req;
>> @@ -285,26 +293,35 @@ int rpmh_write(const struct device *dev, enum rpmh_state state,
>> }
>> EXPORT_SYMBOL(rpmh_write);
>>
>> -static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
>> +static int cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
>> {
>> unsigned long flags;
>>
>> spin_lock_irqsave(&ctrlr->cache_lock, flags);
>> +
>> list_add_tail(&req->list, &ctrlr->batch_cache);
>> ctrlr->dirty = true;
>> +
>> + if (!psci_has_osi_support()) {
>> + if (rpmh_flush(ctrlr)) {
> The whole API here is a bit unfortunate. From what I can tell,
> callers of this code almost always call rpmh_write_batch() in
> triplicate, AKA:
>
> rpmh_write_batch(active, ...)
> rpmh_write_batch(wake, ...)
> rpmh_write_batch(sleep, ...)
>
> ...that's going to end up writing the whole sleep/wake sets twice
> every single time, right? I know you talked about trying to keep
> separate dirty bits for sleep/wake and maybe that would help, but it
> might not be so easy due to the comparison of "sleep_val" and
> "wake_val" in is_req_valid().
>
> I guess we can keep the inefficiency for now and see how much it hits
> us, but it feels ugly.
>
>
>> + spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>> + return -EINVAL;
> nit: why not add "int ret = 0" to the top of the function, then here:
>
> if (rpmh_flush(ctrl))
> ret = -EINVAL;
>
> ...then at the end "return ret". It avoids the 2nd copy of the unlock?
Done.
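
For reference, cache_batch() should end up shaped roughly like the below
(just a sketch; here the rpmh_flush() return value is simply propagated,
per the discussion further down, rather than replaced with -EINVAL):

static int cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
{
        unsigned long flags;
        int ret = 0;

        spin_lock_irqsave(&ctrlr->cache_lock, flags);

        list_add_tail(&req->list, &ctrlr->batch_cache);
        ctrlr->dirty = true;

        /* No OSI in PSCI: flush immediately, still holding cache_lock */
        if (!psci_has_osi_support())
                ret = rpmh_flush(ctrlr);

        spin_unlock_irqrestore(&ctrlr->cache_lock, flags);

        return ret;
}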
>
> Also: Why throw away the return value of rpmh_flush and replace it
> with -EINVAL? Trying to avoid -EBUSY? ...oh, should you handle
> -EBUSY? AKA:
>
> if (!psci_has_osi_support()) {
> do {
> ret = rpmh_flush(ctrl);
> } while (ret == -EBUSY);
> }
Done. The return value from rpmh_flush() can be -EAGAIN, not -EBUSY;
I will update this accordingly and will also include the change below in the next series.
https://patchwork.kernel.org/patch/11364067/
With that change, the caller should not need to handle -EAGAIN.
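
For clarity, if a caller-side retry were still needed it would key off
-EAGAIN rather than -EBUSY, i.e. something like:

        if (!psci_has_osi_support()) {
                do {
                        ret = rpmh_flush(ctrlr);
                } while (ret == -EAGAIN);
        }

but with the change linked above that loop should not be necessary.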
>
>
>> + }
>> + }
>> +
>> spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>> +
>> + return 0;
>> }
>>
>> static int flush_batch(struct rpmh_ctrlr *ctrlr)
>> {
>> struct batch_cache_req *req;
>> const struct rpmh_request *rpm_msg;
>> - unsigned long flags;
>> int ret = 0;
>> int i;
>>
>> /* Send Sleep/Wake requests to the controller, expect no response */
>> - spin_lock_irqsave(&ctrlr->cache_lock, flags);
>> list_for_each_entry(req, &ctrlr->batch_cache, list) {
>> for (i = 0; i < req->count; i++) {
>> rpm_msg = req->rpm_msgs + i;
>> @@ -314,7 +331,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
>> break;
>> }
>> }
>> - spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>
>> return ret;
>> }
>> @@ -386,10 +402,8 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
>> cmd += n[i];
>> }
>>
>> - if (state != RPMH_ACTIVE_ONLY_STATE) {
>> - cache_batch(ctrlr, req);
>> - return 0;
>> - }
>> + if (state != RPMH_ACTIVE_ONLY_STATE)
>> + return cache_batch(ctrlr, req);
>>
>> for (i = 0; i < count; i++) {
>> struct completion *compl = &compls[i];
>> @@ -455,9 +469,6 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>> * Return: -EBUSY if the controller is busy, probably waiting on a response
>> * to a RPMH request sent earlier.
>> *
>> - * This function is always called from the sleep code from the last CPU
>> - * that is powering down the entire system. Since no other RPMH API would be
>> - * executing at this time, it is safe to run lockless.
> nit: you've now got an extra "blank" (just has a "*" on it) line at
> the end of your comment block.
Done.
> nit: in v9, Evan suggested "We should probably replace that with a
> comment indicating that we assume ctrlr->cache_lock is already held".
> Maybe you could do that?
Yes, I left it as is for the reason below, since we can still call it from the sleep code.
I will mention the same in v11.
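
Something along the lines of the below for the comment (exact wording to be
finalized in v11):

        /*
         * This function is called either from the sleep code on the last
         * CPU (no spinlock needed) or from within the cache_lock when the
         * cached data is dirty, with the lock already held.
         */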
Thanks,
Maulik
>
> Also: presumably you _will_ still be called by the sleep code from the
> last CPU on systems with OSI. Is that true? If that's not true then
> you should change your function to static. If that is true, then your
> comment should be something like "this function will either be called
> from sleep code on the last CPU (thus no spinlock needed) or with the
> spinlock already held".
>
>
> -Doug
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation