Date:   Fri, 16 Feb 2018 14:20:22 -0800
From:   Frank Rowand <frowand.list@...il.com>
To:     Chintan Pandya <cpandya@...eaurora.org>,
        Rob Herring <robh+dt@...nel.org>
Cc:     devicetree@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] of: cache phandle nodes to reduce cost of of_find_node_by_phandle()

On 02/16/18 01:04, Chintan Pandya wrote:
> 
> 
> On 2/15/2018 6:22 AM, frowand.list@...il.com wrote:
>> From: Frank Rowand <frank.rowand@...y.com>
>>
>> Create a cache of the nodes that contain a phandle property.  Use this
>> cache to find the node for a given phandle value instead of scanning
>> the devicetree to find the node.  If the phandle value is not found
>> in the cache, of_find_node_by_phandle() will fall back to the tree
>> scan algorithm.
>>

< snip >
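
For reference, here is the lookup pattern that the commit message
describes, written out as a minimal sketch (a sketch only, using the
phandle_cache and phandle_cache_mask declarations quoted below; the
exact code in the patch may differ):

struct device_node *of_find_node_by_phandle(phandle handle)
{
        struct device_node *np = NULL;
        unsigned long flags;
        u32 idx;

        if (!handle)
                return NULL;

        raw_spin_lock_irqsave(&devtree_lock, flags);

        /* try the cache slot selected by the masked phandle value */
        idx = handle & phandle_cache_mask;
        if (phandle_cache && phandle_cache[idx] &&
            phandle_cache[idx]->phandle == handle)
                np = phandle_cache[idx];

        /* fall back to the full tree scan and remember the result */
        if (!np) {
                for_each_of_allnodes(np)
                        if (np->phandle == handle) {
                                if (phandle_cache)
                                        phandle_cache[idx] = np;
                                break;
                        }
        }

        of_node_get(np);
        raw_spin_unlock_irqrestore(&devtree_lock, flags);
        return np;
}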

>> diff --git a/drivers/of/base.c b/drivers/of/base.c
>> index ad28de96e13f..ab545dfa9173 100644
>> --- a/drivers/of/base.c
>> +++ b/drivers/of/base.c
>> @@ -91,10 +91,69 @@ int __weak of_node_to_nid(struct device_node *np)
>>   }
>>   #endif
>>
>> +static struct device_node **phandle_cache;
>> +static u32 phandle_cache_mask;
>> +
>> +/*
>> + * Assumptions behind phandle_cache implementation:
>> + *   - phandle property values are in a contiguous range of 1..n
>> + *
>> + * If the assumptions do not hold, then
>> + *   - the phandle lookup overhead reduction provided by the cache
>> + *     will likely be less
>> + */
>> +static void of_populate_phandle_cache(void)
>> +{
>> +    unsigned long flags;
>> +    u32 cache_entries;
>> +    struct device_node *np;
>> +    u32 phandles = 0;
>> +
>> +    raw_spin_lock_irqsave(&devtree_lock, flags);
>> +
>> +    kfree(phandle_cache);
> 
> I couldn't understand this. Everything else looks good to me.

I will be adding a call to of_populate_phandle_cache() from the
devicetree overlay code.  I put the kfree() here so that the previous
cache memory is freed when a new cache is created.

The call from the overlay code is not added in this series because I
have a separate patch series modifying overlays and I do not want to
create a conflict or an ordering dependency between that series and
this patch.  Until that call is added, the overlay code will gain some
of the overhead reduction from this patch, but possibly not the entire
reduction.
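
Purely as a hypothetical illustration of that future call (the wrapper
name below is made up, not code from either series), the overlay path
would only need something on the order of:

static void example_overlay_post_apply(void)
{
        /*
         * ... the overlay changeset has been applied to the live
         * tree, possibly adding nodes with new phandle values ...
         */

        /*
         * Rebuild the cache for the updated tree: the kfree()
         * questioned above releases the cache built for the previous
         * tree before a new, correctly sized cache is allocated.
         */
        of_populate_phandle_cache();
}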


> 
>> +    phandle_cache = NULL;
>> +
>> +    for_each_of_allnodes(np)

< snip >
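
As a side note on the "contiguous range of 1..n" assumption in the
comment above: if the cache size is rounded up to a power of two, the
mask derived from it maps those contiguous values to distinct slots.
A hypothetical standalone helper (the name and the zero guard are made
up for illustration, this is not the snipped code):

static u32 example_phandle_cache_size(u32 nr_phandles)
{
        /* the result of roundup_pow_of_two() is undefined for 0 */
        if (!nr_phandles)
                return 1;

        /*
         * Round up to a power of two so that
         * (phandle & (size - 1)) is a cheap array index and the
         * contiguous phandle values 1..n land in distinct slots.
         */
        return roundup_pow_of_two(nr_phandles);  /* <linux/log2.h> */
}

For example, 100 phandles give a 128 entry cache and a mask of 0x7f,
so phandle values 1..100 each map to their own slot.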
