Date:   Thu, 9 Jan 2020 10:41:53 +0100
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     David Hildenbrand <david@...hat.com>
Cc:     Michal Hocko <mhocko@...nel.org>,
        "Rafael J. Wysocki" <rafael@...nel.org>,
        linux-kernel@...r.kernel.org,
        Scott Cheloha <cheloha@...ux.vnet.ibm.com>,
        nathanl@...ux.ibm.com, ricklind@...ux.vnet.ibm.com,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v3] drivers/base/memory.c: cache blocks in radix tree to
 accelerate lookup

On Thu, Jan 09, 2020 at 10:31:21AM +0100, David Hildenbrand wrote:
> On 09.01.20 10:19, Michal Hocko wrote:
> > On Thu 09-01-20 09:56:23, Greg KH wrote:
> >> On Thu, Jan 09, 2020 at 09:49:55AM +0100, Michal Hocko wrote:
> >>> On Tue 07-01-20 22:48:04, Michal Hocko wrote:
> >>>> [Cc Andrew]
> >>>>
> >>>> On Tue 17-12-19 13:32:38, Scott Cheloha wrote:
> >>>>> Searching for a particular memory block by id is slow because each block
> >>>>> device is kept in an unsorted linked list on the subsystem bus.
> >>>>
> >>>> Noting that this is O(N^2) would be useful.
> >>>>
> >>>>> Lookup is much faster if we cache the blocks in a radix tree.
> >>>>
> >>>> While this is really easy and straightforward, is there any reason why
> >>>> subsys_find_device_by_id has to use such a slow lookup? I suspect that
> >>>> simply nobody has needed a more optimized data structure for that purpose yet.
> >>>> Would it be too hard to use a radix tree for all lookups rather than
> >>>> adding a shadow copy for memblocks?
> >>>
> >>> Greg, Rafael, this seems to be your domain. Do you have any opinion on
> >>> this?
> >>
> >> No one has cared about the speed of that call as it has never been on
> >> any "fast path" that I know of.  And it should just be O(N); isn't it
> >> just walking the list of devices in order?
> > 
> > Which means that if you have to call it N times then it is O(N^2) and
> > that is the case here because you are adding N memblocks. See
> > memory_dev_init
> >   for each memblock
> >     add_memory_block
> >       init_memory_block
> >         find_memory_block_by_id # checks all existing devices
> >         register_memory
> > 	  device_register # add new device
> >   
> > In this particular case find_memory_block_by_id is called mostly to make
> > sure we are not re-registering something multiple times, which shouldn't
> > happen, so it sucks to spend a lot of time on that. We might think of
> > removing that for boot time but who knows what kind of surprises we
> > might see from crazy HW setups.
> 
> Oh, and please note (as discussed in v1 or v2 of this patch as well)
> that the lookup is also performed in walk_memory_blocks() for each
> memory block in the range, e.g., via link_mem_sections() on system boot.
> There we have O(N^2) as well.

Ok, again self-inflicted; I suggest you all roll your own logic for this
highly accessed set of things :)

thanks,

greg k-h
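
To make the quadratic cost under discussion concrete, here is a standalone C
sketch (illustrative only, not the kernel code; the names block_dev,
blocks_list and block_cache are invented for this example, and a plain
id-indexed array stands in for the radix tree used by the patch). Each
registration first scans an unsorted list for an existing block with the same
id, so N registrations cost roughly N^2/2 comparisons; with an id-keyed
cache each lookup is O(1).

/*
 * Standalone illustration (not kernel code) of the lookup pattern
 * discussed in this thread.
 *
 * register_block_list():   like init_memory_block() ->
 *   find_memory_block_by_id(), every registration walks an unsorted
 *   linked list, so registering N blocks does ~N^2/2 comparisons.
 *
 * register_block_cached(): stand-in for the radix-tree cache from the
 *   patch; a sparse array keyed by block id gives O(1) lookups.
 */
#include <stdio.h>
#include <stdlib.h>

struct block_dev {
	unsigned long id;
	struct block_dev *next;
};

static struct block_dev *blocks_list;	/* unsorted list, like the subsystem bus */
static struct block_dev **block_cache;	/* id-indexed cache, radix-tree stand-in */
static unsigned long long comparisons;

/* linear scan, like subsys_find_device_by_id() walking the bus list */
static struct block_dev *find_block_list(unsigned long id)
{
	struct block_dev *b;

	for (b = blocks_list; b; b = b->next) {
		comparisons++;
		if (b->id == id)
			return b;
	}
	return NULL;
}

static void register_block_list(unsigned long id)
{
	struct block_dev *b;

	if (find_block_list(id))	/* "already registered?" check */
		return;
	b = malloc(sizeof(*b));
	b->id = id;
	b->next = blocks_list;
	blocks_list = b;
}

static void register_block_cached(unsigned long id)
{
	struct block_dev *b;

	comparisons++;
	if (block_cache[id])		/* O(1) lookup in the cache */
		return;
	b = malloc(sizeof(*b));
	b->id = id;
	b->next = NULL;
	block_cache[id] = b;
}

int main(void)
{
	unsigned long n = 20000, i;	/* e.g. a machine with many memblocks */

	comparisons = 0;
	for (i = 0; i < n; i++)
		register_block_list(i);
	printf("linked list: %llu comparisons for %lu blocks\n", comparisons, n);

	block_cache = calloc(n, sizeof(*block_cache));
	comparisons = 0;
	for (i = 0; i < n; i++)
		register_block_cached(i);
	printf("id cache:    %llu comparisons for %lu blocks\n", comparisons, n);

	return 0;
}

The same shape applies to the walk_memory_blocks() case David mentions: every
block in the range triggers another full-list scan, and replacing that scan
with a cached lookup is what removes the quadratic term.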
