Message-ID: <9120b876-d7bb-7e74-f1e4-0ff6f2c6c939@codeaurora.org>
Date: Fri, 28 Feb 2020 16:42:32 +0530
From: Maulik Shah <mkshah@...eaurora.org>
To: Evan Green <evgreen@...omium.org>
Cc: Stephen Boyd <swboyd@...omium.org>,
Matthias Kaehlcke <mka@...omium.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
Andy Gross <agross@...nel.org>,
Doug Anderson <dianders@...omium.org>,
Rajendra Nayak <rnayak@...eaurora.org>,
Lina Iyer <ilina@...eaurora.org>, lsrao@...eaurora.org
Subject: Re: [PATCH v8 2/3] soc: qcom: rpmh: Update dirty flag only when data
changes
On 2/27/2020 11:48 PM, Evan Green wrote:
> On Thu, Feb 27, 2020 at 12:57 AM Maulik Shah <mkshah@...eaurora.org> wrote:
>> Currently the rpmh ctrlr dirty flag is set in all cases, regardless of
>> whether the data actually changed or not. Update the dirty flag only
>> when the data changes to a new value.
>>
>> Also move the dirty flag updates so they happen within the cache_lock,
>> and remove an unnecessary INIT_LIST_HEAD() call and a default case
>> from the switch.
>>
>> Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
>> Signed-off-by: Maulik Shah <mkshah@...eaurora.org>
>> Reviewed-by: Srinivas Rao L <lsrao@...eaurora.org>
>> ---
>> drivers/soc/qcom/rpmh.c | 29 ++++++++++++++++-------------
>> 1 file changed, 16 insertions(+), 13 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index eb0ded0..3f5d9eb 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -133,26 +133,30 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
>>
>>          req->addr = cmd->addr;
>>          req->sleep_val = req->wake_val = UINT_MAX;
>> -        INIT_LIST_HEAD(&req->list);
> Thanks!
>
>>          list_add_tail(&req->list, &ctrlr->cache);
>>
>>  existing:
>>          switch (state) {
>>          case RPMH_ACTIVE_ONLY_STATE:
>> -                if (req->sleep_val != UINT_MAX)
>> +                if (req->sleep_val != UINT_MAX) {
>>                          req->wake_val = cmd->data;
>> +                        ctrlr->dirty = true;
>> +                }
>>                  break;
>>          case RPMH_WAKE_ONLY_STATE:
>> -                req->wake_val = cmd->data;
>> +                if (req->wake_val != cmd->data) {
>> +                        req->wake_val = cmd->data;
>> +                        ctrlr->dirty = true;
>> +                }
>>                  break;
>>          case RPMH_SLEEP_STATE:
>> -                req->sleep_val = cmd->data;
>> -                break;
>> -        default:
>> +                if (req->sleep_val != cmd->data) {
>> +                        req->sleep_val = cmd->data;
>> +                        ctrlr->dirty = true;
>> +                }
>>                  break;
>>          }
>>
>> -        ctrlr->dirty = true;
>>  unlock:
>>          spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>
>> @@ -287,6 +291,7 @@ static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
>>
>>          spin_lock_irqsave(&ctrlr->cache_lock, flags);
>>          list_add_tail(&req->list, &ctrlr->batch_cache);
>> +        ctrlr->dirty = true;
>>          spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>  }
>>
>> @@ -323,6 +328,7 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
>>          list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
>>                  kfree(req);
>>          INIT_LIST_HEAD(&ctrlr->batch_cache);
>> +        ctrlr->dirty = true;
>>          spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>  }
>>
>> @@ -456,13 +462,9 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>>  int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>>  {
>>          struct cache_req *p;
>> +        unsigned long flags;
>>          int ret;
>>
>> -        if (!ctrlr->dirty) {
>> -                pr_debug("Skipping flush, TCS has latest data.\n");
>> -                return 0;
>> -        }
>> -
>>          /* First flush the cached batch requests */
>>          ret = flush_batch(ctrlr);
>>          if (ret)
>> @@ -488,7 +490,9 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>>                          return ret;
>>          }
>>
>> +        spin_lock_irqsave(&ctrlr->cache_lock, flags);
>>          ctrlr->dirty = false;
>> +        spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> You're acquiring a lock around an operation that's already inherently
> atomic, which is not right. If the comment earlier in this function is
> still correct that "Nobody else should be calling this function other
> than system PM, hence we can run without locks", then you can simply
> remove this hunk and the part moving ->dirty = true into
> invalidate_batch.
>
> However, if rpmh_flush() can now be called in a scenario where
> pre-emption is enabled or multiple cores are alive, then ctrlr->cache
> is no longer adequately protected. You'd need to add a lock
> acquire/release around the list iteration above, and fix up the
> comment.
> -Evan
Hi Evan,
Right, rpmh_flush() can now be called from any CPU. I will remove the
comment above.

The flush_batch() and ctrlr->dirty update parts of rpmh_flush() were
already covered by the cache_lock; however, the entire rpmh_flush()
now needs to be protected by the cache_lock.

Will update this in v9, roughly along the lines of the sketch below.
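
Illustrative sketch only (not the actual v9 patch): hold the cache_lock
across the whole flush so that the ctrlr->cache walk and the dirty-flag
clear stay consistent with concurrent cache updates. It assumes that
flush_batch() does not itself take the cache_lock, and that
send_single() takes (ctrlr, state, addr, data), as the quoted hunk
header suggests:

  int rpmh_flush(struct rpmh_ctrlr *ctrlr)
  {
          struct cache_req *p;
          unsigned long flags;
          int ret;

          spin_lock_irqsave(&ctrlr->cache_lock, flags);

          /* First flush the cached batch requests */
          ret = flush_batch(ctrlr);
          if (ret)
                  goto exit;

          /*
           * rpmh_flush() may race with cache updates from other CPUs,
           * so walk ctrlr->cache with the cache_lock held.
           */
          list_for_each_entry(p, &ctrlr->cache, list) {
                  ret = send_single(ctrlr, RPMH_SLEEP_STATE, p->addr,
                                    p->sleep_val);
                  if (ret)
                          goto exit;
                  ret = send_single(ctrlr, RPMH_WAKE_ONLY_STATE, p->addr,
                                    p->wake_val);
                  if (ret)
                          goto exit;
          }

          /* Cache and TCS are in sync again; clear the dirty flag. */
          ctrlr->dirty = false;
  exit:
          spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
          return ret;
  }
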
Thanks,
Maulik
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation