Message-ID: <3f7c689b-700a-1d76-505e-76446c62439f@codeaurora.org>
Date:   Thu, 27 Feb 2020 11:02:34 +0530
From:   Maulik Shah <mkshah@...eaurora.org>
To:     Stephen Boyd <swboyd@...omium.org>, bjorn.andersson@...aro.org,
        evgreen@...omium.org, mka@...omium.org
Cc:     linux-kernel@...r.kernel.org, linux-arm-msm@...r.kernel.org,
        agross@...nel.org, dianders@...omium.org, rnayak@...eaurora.org,
        ilina@...eaurora.org, lsrao@...eaurora.org
Subject: Re: [PATCH v7 2/3] soc: qcom: rpmh: Update dirty flag only when data
 changes


On 2/27/2020 4:13 AM, Stephen Boyd wrote:
> Quoting Maulik Shah (2020-02-25 21:27:12)
>> Currently the rpmh ctrlr dirty flag is set in all cases, regardless
>> of whether the data has actually changed. Update the flag only when
>> the data changes to a new value.
>>
>> Also move the dirty flag updates so they happen within the cache_lock.
>>
>> Signed-off-by: Maulik Shah <mkshah@...eaurora.org>
>> Reviewed-by: Srinivas Rao L <lsrao@...eaurora.org>
> Probably worth adding a Fixes tag here? Doesn't make sense to mark
> something dirty when it isn't changed.
Done. Will update in v8.
>> ---
>>   drivers/soc/qcom/rpmh.c | 21 ++++++++++++++++-----
>>   1 file changed, 16 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index eb0ded0..83ba4e0 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -139,20 +139,27 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
>>   existing:
>>          switch (state) {
>>          case RPMH_ACTIVE_ONLY_STATE:
>> -               if (req->sleep_val != UINT_MAX)
>> +               if (req->sleep_val != UINT_MAX) {
>>                          req->wake_val = cmd->data;
>> +                       ctrlr->dirty = true;
>> +               }
>>                  break;
>>          case RPMH_WAKE_ONLY_STATE:
>> -               req->wake_val = cmd->data;
>> +               if (req->wake_val != cmd->data) {
>> +                       req->wake_val = cmd->data;
>> +                       ctrlr->dirty = true;
>> +               }
>>                  break;
>>          case RPMH_SLEEP_STATE:
>> -               req->sleep_val = cmd->data;
>> +               if (req->sleep_val != cmd->data) {
>> +                       req->sleep_val = cmd->data;
>> +                       ctrlr->dirty = true;
>> +               }
>>                  break;
>>          default:
>>                  break;
> Please remove the default case. There are only three states in the enum. The
> compiler will warn if a switch statement doesn't cover all cases and
> we'll know to add something here if another enum value is added in the
> future.
Done.
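
For v8 the switch would then presumably reduce to the three enum values
only; since -Wswitch (part of -Wall, which the kernel build uses) warns
when a switch on an enum has no default and leaves a value unhandled, a
future rpmh_state addition would get flagged here:

	switch (state) {
	case RPMH_ACTIVE_ONLY_STATE:
		if (req->sleep_val != UINT_MAX) {
			req->wake_val = cmd->data;
			ctrlr->dirty = true;
		}
		break;
	case RPMH_WAKE_ONLY_STATE:
		if (req->wake_val != cmd->data) {
			req->wake_val = cmd->data;
			ctrlr->dirty = true;
		}
		break;
	case RPMH_SLEEP_STATE:
		if (req->sleep_val != cmd->data) {
			req->sleep_val = cmd->data;
			ctrlr->dirty = true;
		}
		break;
	}
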
>>          }
>>   
>> -       ctrlr->dirty = true;
>>   unlock:
>>          spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>   
>> @@ -323,6 +331,7 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
>>          list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
>>                  kfree(req);
>>          INIT_LIST_HEAD(&ctrlr->batch_cache);
>> +       ctrlr->dirty = true;
>>          spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>   }
>>   
>> @@ -456,6 +465,7 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>>   int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>>   {
>>          struct cache_req *p;
>> +       unsigned long flags;
>>          int ret;
>>   
>>          if (!ctrlr->dirty) {
>> @@ -488,7 +498,9 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>>                          return ret;
>>          }
>>   
>> +       spin_lock_irqsave(&ctrlr->cache_lock, flags);
>>          ctrlr->dirty = false;
>> +       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> So we take the spinlock to update it here. But we don't hold the
> spinlock to test for !dirty up above. Seems like either rpmh_flush() can
> only be called sequentially, or the lock added here needs to be held
> during the whole flush. Which way is it?

Thanks, I will remove the !ctrlr->dirty check within rpmh_flush(), as
currently we invoke it only when the caches are dirty.

The last CPU going down can first check the dirty flag outside
rpmh_flush() and decide whether to invoke it accordingly.
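
For illustration, a rough caller-side sketch of that idea (the cpu_pm
notifier hook and the rsc_pm member are assumptions for this example,
not part of this patch):

#include <linux/cpu_pm.h>
#include <linux/notifier.h>

#include "rpmh-internal.h"

/*
 * Hypothetical sketch only: the last CPU entering a low power state
 * checks ctrlr->dirty itself before calling rpmh_flush(), so
 * rpmh_flush() no longer needs its own !dirty early return. Since this
 * runs only on the last CPU going down, there are no racing updaters
 * at this point.
 */
static int rpmh_dirty_flush_notify(struct notifier_block *nfb,
				   unsigned long action, void *v)
{
	/* rsc_pm is an assumed notifier_block member of struct rsc_drv */
	struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);

	/* Only flush on the way down, and only if the cache is dirty */
	if (action == CPU_PM_ENTER && drv->client.dirty &&
	    rpmh_flush(&drv->client))
		return NOTIFY_BAD;

	return NOTIFY_OK;
}
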

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation
