Date:   Fri, 16 Mar 2018 10:00:44 -0700
From:   Stephen Boyd <swboyd@...omium.org>
To:     Lina Iyer <ilina@...eaurora.org>
Cc:     andy.gross@...aro.org, david.brown@...aro.org,
        linux-arm-msm@...r.kernel.org, linux-soc@...r.kernel.org,
        rnayak@...eaurora.org, bjorn.andersson@...aro.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 09/10] drivers: qcom: rpmh: add support for batch RPMH request

Quoting Lina Iyer (2018-03-08 14:55:40)
> On Thu, Mar 08 2018 at 14:59 -0700, Stephen Boyd wrote:
> >Quoting Lina Iyer (2018-03-02 08:43:16)
> >> @@ -343,6 +346,146 @@ int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
> >>  }
> >>  EXPORT_SYMBOL(rpmh_write);
> >>
> >> +static int cache_batch(struct rpmh_client *rc,
> >> +                     struct rpmh_request **rpm_msg, int count)
> >> +{
> >> +       struct rpmh_ctrlr *rpm = rc->ctrlr;
> >> +       unsigned long flags;
> >> +       int ret = 0;
> >> +       int index = 0;
> >> +       int i;
> >> +
> >> +       spin_lock_irqsave(&rpm->lock, flags);
> >> +       while (rpm->batch_cache[index])
> >
> >If batch_cache is full.
> >And if adjacent memory has bits set....
> >
> >This loop can go forever?
> >
> >Please add bounds.
> >
> How so? The if() below will ensure that it will not exceed bounds.

Right, the if below will make sure we don't run off the end, but unless
rpm->batch_cache has a sentinel at the end we can't guarantee the while
loop stops at the array boundary: if every slot is non-NULL, the scan
walks into adjacent memory that happens to have a bit set, and then we
may read into some unallocated space. Or maybe I missed something.
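Something like this is what I had in mind -- a user-space sketch, not
the actual driver code, with illustrative names (find_free_index,
BATCH_CACHE_SIZE), showing a scan that is bounded by the array size
rather than by a NULL sentinel:

```c
#include <assert.h>
#include <stddef.h>

#define RPMH_MAX_REQ_IN_BATCH 10
#define BATCH_CACHE_SIZE (2 * RPMH_MAX_REQ_IN_BATCH)

/*
 * Bounded scan for the first free slot: stop at the array end even
 * when no NULL entry exists, so we never read past batch_cache[].
 * Returns size when the cache is full, which the caller can treat
 * the same way the existing "index + count >= ..." check does.
 */
static int find_free_index(void *batch_cache[], size_t size)
{
	size_t index = 0;

	while (index < size && batch_cache[index])
		index++;
	return (int)index;
}
```

With that, a completely full cache returns size instead of marching
off into whatever memory follows the array.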

> 
> >> +               index++;
> >> +       if (index + count >=  2 * RPMH_MAX_REQ_IN_BATCH) {
> >> +               ret = -ENOMEM;
> >> +               goto fail;
> >> +       }
> >> +
> >> +       for (i = 0; i < count; i++)
> >> +               rpm->batch_cache[index + i] = rpm_msg[i];
> >> +fail:
> >> +       spin_unlock_irqrestore(&rpm->lock, flags);
> >> +
> >> +       return ret;
> >> +}
> >> +
> 
> >> + * @state: Active/sleep set
> >> + * @cmd: The payload data
> >> + * @n: The array of count of elements in each batch, 0 terminated.
> >> + *
> >> + * Write a request to the mailbox controller without caching. If the request
> >> + * state is ACTIVE, then the requests are treated as completion request
> >> + * and sent to the controller immediately. The function waits until all the
> >> + * commands are complete. If the request was to SLEEP or WAKE_ONLY, then the
> >> + * request is sent as fire-n-forget and no ack is expected.
> >> + *
> >> + * May sleep. Do not call from atomic contexts for ACTIVE_ONLY requests.
> >> + */
> >> +int rpmh_write_batch(struct rpmh_client *rc, enum rpmh_state state,
> >> +                   struct tcs_cmd *cmd, int *n)
> >
> >I'm lost why n is a pointer, and cmd is not a double pointer if n stays
> >as a pointer. Are there clients calling this API with a contiguous chunk
> >of commands but then they want to break that chunk up into many
> >requests?
> >
> That is correct. Clients want to provide one big buffer that this API
> will break up into requests of the sizes specified in *n.

Is that for bus scaling?
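So, if I follow, the zero-terminated count array partitions one
contiguous tcs_cmd buffer. A user-space sketch of that walk (the
helper split_batches and its bound are my own illustration, not the
patch's code):

```c
#include <assert.h>
#include <stddef.h>

struct tcs_cmd {
	unsigned int addr;
	unsigned int data;
};

/*
 * Walk the zero-terminated count array n[] and record where each
 * batch starts within the contiguous cmd buffer.  Returns the number
 * of batches, or -1 if there are more than max_batches of them.
 */
static int split_batches(struct tcs_cmd *cmd, const int *n,
			 struct tcs_cmd **starts, int max_batches)
{
	int count = 0;

	while (n[count]) {
		if (count >= max_batches)
			return -1;
		starts[count] = cmd;	/* batch 'count' begins here */
		cmd += n[count];	/* advance past its commands */
		count++;
	}
	return count;
}
```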

> >> +                       return PTR_ERR(rpm_msg[i]);
> >> +               }
> >> +               cmd += n[i];
> >> +       }
> >> +
> >> +       /* Send if Active and wait for the whole set to complete */
> >> +       if (state == RPMH_ACTIVE_ONLY_STATE) {
> >> +               might_sleep();
> >> +               atomic_set(&wait_count, count);
> >
> >Aha, here's the wait counter.
> >
> :)
> I am removing it from the earlier patch and introducing the wait_count
> here. Not as bad as I thought.

Thanks!

> 
> >> +               for (i = 0; i < count; i++) {
> >> +                       rpm_msg[i]->completion = &compl;
> >> +                       rpm_msg[i]->wait_count = &wait_count;
> >
> >But then we just assign the same count and completion to each rpm_msg?
> >Why? Can't we just put the completion on the final one and have the
> >completion called there?
> >
> The responses are not guaranteed to arrive in the order the requests
> were sent. So we have to do this.

OK! That is sad.
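At least the shared-counter pattern itself is simple: whichever
response happens to arrive last fires the completion. A user-space
sketch with C11 atomics (struct batch_wait and tx_done are
illustrative names, and atomic_bool stands in for the kernel's
struct completion):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Every request in the batch shares one countdown; order of arrival
 * does not matter, only that all of them eventually report in.
 */
struct batch_wait {
	atomic_int remaining;
	atomic_bool done;
};

static void tx_done(struct batch_wait *w)
{
	/* atomic_fetch_sub returns the old value, so the response
	 * that takes remaining from 1 to 0 signals completion. */
	if (atomic_fetch_sub(&w->remaining, 1) == 1)
		atomic_store(&w->done, true);
}
```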

> 
> >> +                       /* Bypass caching and write to mailbox directly */
> >> +                       ret = rpmh_rsc_send_data(rc->ctrlr->drv,
> >> +                                               &rpm_msg[i]->msg);
> >> +                       if (ret < 0) {
> >> +                               pr_err(
> >> +                               "Error(%d) sending RPMH message addr=0x%x\n",
> >> +                               ret, rpm_msg[i]->msg.payload[0].addr);
> >> +                               break;
> >> +                       }
> >> +               }
> >> +               /* For those unsent requests, spoof tx_done */
> >
> >Why? Comments shouldn't say what the code is doing, but explain why
> >things don't make sense.
> >
> Will remove..
> 

Oh, I was hoping for more details, not less.
