Message-ID: <CAL_JsqLj+hAtdCnhozYojzZukT1_dLLbZ8a0iTWH7zfn6k=3Sw@mail.gmail.com>
Date:   Fri, 26 Jan 2018 09:21:00 -0600
From:   Rob Herring <robh+dt@...nel.org>
To:     Chintan Pandya <cpandya@...eaurora.org>
Cc:     Frank Rowand <frowand.list@...il.com>,
        "open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS" 
        <devicetree@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        linux-arm-msm <linux-arm-msm@...r.kernel.org>
Subject: Re: [PATCH] of: use hash based search in of_find_node_by_phandle

On Fri, Jan 26, 2018 at 1:22 AM, Chintan Pandya <cpandya@...eaurora.org> wrote:
> On 1/25/2018 8:20 PM, Rob Herring wrote:
>>
>> On Thu, Jan 25, 2018 at 4:14 AM, Chintan Pandya <cpandya@...eaurora.org>
>> wrote:
>>>

[...]

>> I'd guess that there are really only a few phandle lookups that occur
>> over and over.
>
> On my system, there are ~6.7k calls to this API during boot.

And after boot it will be near 0, yet we carry the memory usage forever.

>> The clock controller, interrupt controller, etc. What
>> if you just had a simple array of previously found nodes as a cache,
>> and of_find_node_by_phandle could check that array first? Probably
>> 8-16 entries would be enough.
>
> I clearly see repeated calls with the same phandle, but I have a few
> hundred nodes. I see hashing as a generic optimization which applies
> equally well to DTs of all sizes. Using ~4KB more memory to save
> 400 ms is a good trade-off, I believe.
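
(Purely for illustration, the kind of phandle-keyed hash table that
carries that ~4KB cost could look roughly like the sketch below. The
names and sizes here are made up; this is not the code from the patch.)

#include <linux/hashtable.h>
#include <linux/of.h>
#include <linux/slab.h>

/*
 * Illustrative sketch only: give every node that has a phandle one
 * hash table entry, so lookups stop walking the whole tree.  The
 * per-node entries plus the bucket array are where the extra memory
 * goes.
 */
struct phandle_ht_entry {
        struct hlist_node hlist;
        struct device_node *np;
};

#define PHANDLE_HASH_BITS 7     /* 128 buckets; size is an assumption */
static DEFINE_HASHTABLE(phandle_ht, PHANDLE_HASH_BITS);

/* Fill the table once, early in boot, by walking all nodes. */
static void phandle_ht_populate(void)
{
        struct device_node *np;

        for_each_of_allnodes(np) {
                struct phandle_ht_entry *e;

                if (!np->phandle)
                        continue;
                e = kzalloc(sizeof(*e), GFP_KERNEL);
                if (!e)
                        continue;
                e->np = np;
                hash_add(phandle_ht, &e->hlist, np->phandle);
        }
}

/* Constant-time-ish lookup instead of the linear tree walk. */
static struct device_node *phandle_ht_lookup(phandle handle)
{
        struct phandle_ht_entry *e;

        hash_for_each_possible(phandle_ht, e, hlist, handle)
                if (e->np->phandle == handle)
                        return of_node_get(e->np);

        return NULL;
}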

But if you can use 200 bytes and save 350 ms, that would be a better
trade-off IMO. But we don't know, because we have no data.
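
To be concrete about what I mean, here is a rough sketch (the size,
the direct-mapped indexing, and the missing devtree_lock handling are
all simplifications, not real kernel code):

#include <linux/of.h>

/*
 * Sketch only: a tiny fixed-size cache in front of the existing
 * linear walk.  Direct-mapped by the low bits of the phandle for
 * simplicity; a small LRU array would work just as well.
 */
#define PHANDLE_CACHE_SZ 16     /* assumed; 16 pointers is 128 bytes on 64-bit */

static struct device_node *phandle_cache[PHANDLE_CACHE_SZ];

struct device_node *of_find_node_by_phandle(phandle handle)
{
        struct device_node *np;
        unsigned int slot = handle & (PHANDLE_CACHE_SZ - 1);

        if (!handle)
                return NULL;

        /* Fast path: the handful of phandles looked up over and over. */
        np = phandle_cache[slot];
        if (np && np->phandle == handle)
                return of_node_get(np);

        /* Slow path: the existing walk over all nodes. */
        for_each_of_allnodes(np)
                if (np->phandle == handle)
                        break;

        if (np)
                phandle_cache[slot] = np;       /* remember for next time */

        return of_node_get(np);
}

That keeps the footprint in the hundreds-of-bytes range and still
short-circuits the clock and interrupt controller lookups that repeat
the most.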

Rob
