Date:   Mon, 30 Apr 2018 09:33:25 -0500
From:   Rob Herring <>
To:     Rishabh Bhatnagar <>,
        linux-arm-msm <>,
        Trilok Soni <>,
        Kyle Yan <>,
        Evan Green <>
Subject: Re: [PATCH v5 1/2] dt-bindings: Documentation for qcom,llcc

On Fri, Apr 27, 2018 at 5:57 PM,  <> wrote:
> On 2018-04-27 07:21, Rob Herring wrote:
>> On Mon, Apr 23, 2018 at 04:09:31PM -0700, Rishabh Bhatnagar wrote:
>>> Documentation for last level cache controller device tree bindings
>>> and usage examples for client bindings.
>>> Signed-off-by: Channagoud Kadabi <>
>>> Signed-off-by: Rishabh Bhatnagar <>
>>> ---
>>>  .../devicetree/bindings/arm/msm/qcom,llcc.txt      | 60
>>> ++++++++++++++++++++++
>>>  1 file changed, 60 insertions(+)
>>>  create mode 100644
>>> Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt
>> My comments on v4 still apply.
>> Rob
> Hi Rob,
> Reposting our replies to your comments on v4:
> This is partially true: a number of SoCs would support this design, but
> client IDs are not expected to change, so ideally client drivers could
> hard-code these IDs.
> However, I have other concerns about moving the client IDs into the driver.
> The way the APIs are implemented today is as follows:
> #1. Client calls into system cache driver to get cache slice handle
> with the usecase Id as input.
> #2. System cache driver gets the phandle of system cache instance from
> the client device to obtain the private data.
> #3. Based on the usecase Id perform look up in the private data to get
> cache slice handle.
> #4. Return the cache slice handle to the client.
> If we don't have the connection between the client & the system cache, then
> the private data needs to be declared as a static global in the system cache
> driver, which limits us to just one instance of the system cache block.
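The four-step lookup described above can be sketched as plain C. All names,
structures, and IDs below are hypothetical, for illustration only; the real
qcom LLCC driver API differs, and step 2 (resolving the client's phandle to
the driver's private data) is reduced here to passing the private data in
directly:

```c
#include <assert.h>
#include <stddef.h>

/* Per-slice entry in the system cache driver's private data
 * (illustrative layout, not the real driver's). */
struct llcc_slice {
	int usecase_id;  /* stable usecase ID the client passes in (step 1) */
	int slice_id;    /* hardware cache slice mapped to that usecase */
};

/* Private data of one system cache instance; with a client/cache
 * connection in DT there can be several of these, one per instance. */
struct llcc_priv {
	const struct llcc_slice *slices;
	size_t num_slices;
};

/* Steps 3 and 4: look up the usecase ID in this instance's private
 * data and return the matching slice handle, or NULL if unknown. */
static const struct llcc_slice *
llcc_slice_get(const struct llcc_priv *priv, int usecase_id)
{
	size_t i;

	for (i = 0; i < priv->num_slices; i++)
		if (priv->slices[i].usecase_id == usecase_id)
			return &priv->slices[i];

	return NULL;
}

/* Example instance with two slices; the IDs are made up. */
static const struct llcc_slice example_slices[] = {
	{ .usecase_id = 1, .slice_id = 10 },
	{ .usecase_id = 2, .slice_id = 11 },
};
static const struct llcc_priv example_priv = {
	.slices = example_slices,
	.num_slices = 2,
};
```

The point of contention is exactly where `example_priv` comes from: with a
phandle from the client node, each instance carries its own private data;
without it, the table collapses into one static global.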

How many instances do you have?

It is easier to put the data into the kernel and move it to DT later
than vice versa. I don't think it is a good idea to do a custom
binding here, one that addresses only caches and nothing else in
the interconnect. So either we define an extensible, future-proof
binding or we put the data into the kernel for now.
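Keeping the data in the kernel, as suggested above, could look roughly like
the following sketch: a per-SoC table selected by compatible string. The
usecase and slice IDs are invented for illustration, and a real driver would
hang the table off `of_device_id.data` and match via `of_match_device()`
rather than comparing strings by hand:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative per-SoC usecase table kept in the kernel, not in DT. */
struct llcc_usecase {
	int usecase_id;  /* stable client usecase ID */
	int slice_id;    /* hardware cache slice for that usecase */
};

/* IDs below are made up, not real SDM845 values. */
static const struct llcc_usecase sdm845_usecases[] = {
	{ 1, 10 },
	{ 2, 11 },
};

struct llcc_soc_data {
	const char *compatible;
	const struct llcc_usecase *usecases;
	size_t num_usecases;
};

static const struct llcc_soc_data llcc_soc_table[] = {
	{ "qcom,sdm845-llcc", sdm845_usecases,
	  sizeof(sdm845_usecases) / sizeof(sdm845_usecases[0]) },
};

/* Select the SoC's table by compatible string, standing in for the
 * of_match_device() lookup a real driver would do at probe time. */
static const struct llcc_soc_data *llcc_soc_lookup(const char *compatible)
{
	size_t i;

	for (i = 0; i < sizeof(llcc_soc_table) / sizeof(llcc_soc_table[0]); i++)
		if (!strcmp(llcc_soc_table[i].compatible, compatible))
			return &llcc_soc_table[i];

	return NULL;
}
```

With this shape, supporting a new SoC means adding one table and one match
entry in the driver, and the data can still migrate to DT later if a generic
interconnect binding emerges.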

