Message-ID: <8d19958d-7334-ca4e-d7ba-f5919a56b279@codeaurora.org>
Date:   Fri, 27 Mar 2020 17:34:26 +0530
From:   Maulik Shah <mkshah@...eaurora.org>
To:     Doug Anderson <dianders@...omium.org>
Cc:     Stephen Boyd <swboyd@...omium.org>,
        Evan Green <evgreen@...omium.org>,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        LKML <linux-kernel@...r.kernel.org>,
        linux-arm-msm <linux-arm-msm@...r.kernel.org>,
        Andy Gross <agross@...nel.org>,
        Matthias Kaehlcke <mka@...omium.org>,
        Rajendra Nayak <rnayak@...eaurora.org>,
        Lina Iyer <ilina@...eaurora.org>, lsrao@...eaurora.org
Subject: Re: [PATCH v14 6/6] soc: qcom: rpmh-rsc: Allow using free WAKE TCS
 for active request

Hi,

On 3/27/2020 3:16 AM, Doug Anderson wrote:
> Hi,
>
> On Thu, Mar 26, 2020 at 10:38 AM Maulik Shah <mkshah@...eaurora.org> wrote:
>> When more than one WAKE TCS is available and there is no dedicated
>> ACTIVE TCS, invalidating all WAKE TCSes and waiting for the current
>> transfer to complete in the first WAKE TCS blocks the use of another
>> free WAKE TCS to complete the current request.
>>
>> Remove the rpmh_rsc_invalidate() call from tcs_write() when a WAKE TCS
>> is re-purposed for active mode use. Clear only the currently used
>> WAKE TCS's register configuration.
>>
>> Mark the caches as dirty so that the next time rpmh_flush() is invoked
>> it can invalidate and program the cached sleep and wake sets again.
>>
>> Fixes: 2de4b8d33eab (drivers: qcom: rpmh-rsc: allow active requests from wake TCS)
>> Signed-off-by: Maulik Shah <mkshah@...eaurora.org>
>> ---
>>  drivers/soc/qcom/rpmh-rsc.c | 29 +++++++++++++++++++----------
>>  1 file changed, 19 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>> index 8fa70b4..c0513af 100644
>> --- a/drivers/soc/qcom/rpmh-rsc.c
>> +++ b/drivers/soc/qcom/rpmh-rsc.c
>> @@ -154,8 +154,9 @@ int rpmh_rsc_invalidate(struct rsc_drv *drv)
>>  static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
>>                                          const struct tcs_request *msg)
>>  {
>> -       int type, ret;
>> +       int type;
>>         struct tcs_group *tcs;
>> +       unsigned long flags;
>>
>>         switch (msg->state) {
>>         case RPMH_ACTIVE_ONLY_STATE:
>> @@ -175,18 +176,18 @@ static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
>>          * If we are making an active request on a RSC that does not have a
>>          * dedicated TCS for active state use, then re-purpose a wake TCS to
>>          * send active votes.
>> -        * NOTE: The driver must be aware that this RSC does not have a
>> -        * dedicated AMC, and therefore would invalidate the sleep and wake
>> -        * TCSes before making an active state request.
>> +        *
>> +        * NOTE: Mark the caches as dirty here since the existing data in
>> +        * the wake TCS will be lost. rpmh_flush() will process dirty
>> +        * caches to restore the data.
>>          */
>>         tcs = get_tcs_of_type(drv, type);
>>         if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs) {
>>                 tcs = get_tcs_of_type(drv, WAKE_TCS);
>> -               if (tcs->num_tcs) {
>> -                       ret = rpmh_rsc_invalidate(drv);
>> -                       if (ret)
>> -                               return ERR_PTR(ret);
>> -               }
>> +
>> +               spin_lock_irqsave(&drv->client.cache_lock, flags);
>> +               drv->client.dirty = true;
>> +               spin_unlock_irqrestore(&drv->client.cache_lock, flags);
> This seems like a huge abstraction violation.  

Agreed that the cache_lock and dirty flag belong to rpmh.c.

I will address this by either notifying rpmh.c to mark it dirty or finding another solution.

> Why can't rpmh_write()
> / rpmh_write_async() / rpmh_write_batch() just always unconditionally
> mark the cache dirty?  Are there really lots of cases when those calls
> are made and they do nothing?

rpmh.c does not know that rpmh-rsc.c used a borrowed TCS to finish the request.

We should not blindly mark the caches dirty every time.

>
>
> Other than that this patch seems sane to me and addresses one of the
> comments I had in:
>
> https://lore.kernel.org/r/CAD=FV=XmBQb8yfx14T-tMQ68F-h=3UHog744b3X3JZViu15+4g@mail.gmail.com
>
> ...interestingly, after your patch I guess tcs_invalidate()
> no longer needs spinlocks since it's only ever called from PM code on
> the last CPU.  ...if you agree, I can always do it in my cleanup
> series.  See:
>
> https://lore.kernel.org/r/CAD=FV=Xp1o68HnC2-hMnffDDsi+jjgc9pNrdNuypjQZbS5K4nQ@mail.gmail.com
>
> -Doug

There are other RSCs that use the same driver, so let's keep the spinlock.

I still haven't had a chance to validate your patch (I will have an update sometime next week). Just to note, I have never seen any issue internally using spin_lock even in the nosmp case; that might require changing it to the _irqsave/_irqrestore variant.

Thanks,
Maulik

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation
