Message-ID: <CA+55aFx24n-W4-wTtrfbt9PNvVd7n+SvThnO6OQ74uW4yNrGxw@mail.gmail.com>
Date:	Tue, 7 Feb 2012 17:50:59 -0800
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Konstantin Khlebnikov <khlebnikov@...nvz.org>
Cc:	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Hugh Dickins <hughd@...gle.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/4] radix-tree: iterating general cleanup

On Tue, Feb 7, 2012 at 5:30 PM, Konstantin Khlebnikov
<khlebnikov@...nvz.org> wrote:
>
> If you don't count the comments, it is actually a negative line-count
> change.

Ok, fair enough.

> And if we drop the (almost) unused radix_tree_gang_lookup_tag_slot()
> and radix_tree_gang_lookup_slot(), the total bloat-o-meter score
> becomes negative too.

Good.

> There are also some simple bit-hacks: find-next-bit instead of dumb
> loops in the tagged lookup (sketched below).
>
> Here are some benchmark results (a sketch of the harness also follows
> the numbers): there is a radix tree with 1024 slots; I fill and tag
> every <step>-th slot, then run a lookup over all slots with
> radix_tree_gang_lookup() and radix_tree_gang_lookup_tag() in a loop.
> The old/new rows are nsec per iteration over the whole tree.
>
> tagged-lookup
> step    1       2       3       4       5       6       7       8       9       10      11      12      13      14      15      16
> old     7035    5248    4742    4308    4217    4133    4030    3920    4038    3933    3914    3796    3851    3755    3819    3582
> new     3578    2617    1899    1426    1220    1058    936     822     845     749     695     679     648     575     591     509
>
> So the new tagged lookup is always faster, especially for sparse trees.
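
For illustration, here is a minimal user-space sketch of the
find-next-bit idea mentioned above (the helper names and the flat tag
bitmap are assumptions for the example, not the actual radix-tree
internals):

#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Dumb loop: test every bit until a set one is found. */
static size_t next_tagged_dumb(const unsigned long *tags, size_t size,
			       size_t offset)
{
	for (; offset < size; offset++)
		if (tags[offset / BITS_PER_LONG] &
		    (1UL << (offset % BITS_PER_LONG)))
			break;
	return offset;
}

/* find-next-bit style: skip whole all-zero words at a time, then use
 * count-trailing-zeros on the first non-zero word.  Assumes bits past
 * 'size' are clear. */
static size_t next_tagged_fast(const unsigned long *tags, size_t size,
			       size_t offset)
{
	while (offset < size) {
		unsigned long word = tags[offset / BITS_PER_LONG] >>
				     (offset % BITS_PER_LONG);
		if (word)
			return offset + __builtin_ctzl(word);
		/* Jump to the start of the next word. */
		offset = (offset / BITS_PER_LONG + 1) * BITS_PER_LONG;
	}
	return size;
}

On a sparsely tagged tree the fast version skips a whole word of slots
(64 on 64-bit) per iteration instead of one, which is consistent with
the old/new gap widening as <step> grows.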
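
And a sketch of the quoted benchmark setup against the in-kernel
radix-tree API (the harness structure, the ktime-based timing, and the
item values are assumptions, not Konstantin's actual test code):

#include <linux/kernel.h>
#include <linux/ktime.h>
#include <linux/radix-tree.h>

#define NR_SLOTS  1024
#define BENCH_TAG 0	/* which radix-tree tag to exercise */

static RADIX_TREE(bench_tree, GFP_KERNEL);
static void *results[NR_SLOTS];

/* Fill and tag every <step>-th slot, as in the quoted setup.
 * Error handling is omitted for brevity. */
static void bench_fill(unsigned long step)
{
	unsigned long i;

	for (i = 0; i < NR_SLOTS; i += step) {
		radix_tree_insert(&bench_tree, i, (void *)(i + 1));
		radix_tree_tag_set(&bench_tree, i, BENCH_TAG);
	}
}

/* One timed pass over the whole tree with each gang-lookup flavour. */
static void bench_run(void)
{
	ktime_t t0, t1;
	unsigned int found;

	t0 = ktime_get();
	found = radix_tree_gang_lookup(&bench_tree, results, 0, NR_SLOTS);
	t1 = ktime_get();
	pr_info("lookup: %u items, %lld ns\n",
		found, ktime_to_ns(ktime_sub(t1, t0)));

	t0 = ktime_get();
	found = radix_tree_gang_lookup_tag(&bench_tree, results, 0,
					   NR_SLOTS, BENCH_TAG);
	t1 = ktime_get();
	pr_info("tagged-lookup: %u items, %lld ns\n",
		found, ktime_to_ns(ktime_sub(t1, t0)));
}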

Do you have any benchmarks where it's actually used by the higher
levels, though? I guess that will involve find_get_pages(), and we
don't have all that many of them, but it would be lovely to see some
real-load numbers too, even if they are limited to one of the
filesystems that uses this.
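
For reference, the higher-level path in question looks roughly like
this (a deliberately simplified sketch; the real mm/filemap.c
find_get_pages() of this era runs under rcu_read_lock() and takes
speculative page references with retry on races, all omitted here):

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/radix-tree.h>

/* Simplified stand-in for find_get_pages(): gang-lookup the page-cache
 * radix tree, then pin each page found. */
static unsigned sketch_find_get_pages(struct address_space *mapping,
				      pgoff_t start, unsigned int nr_pages,
				      struct page **pages)
{
	unsigned int i, ret;

	ret = radix_tree_gang_lookup(&mapping->page_tree,
				     (void **)pages, start, nr_pages);
	for (i = 0; i < ret; i++)
		page_cache_get(pages[i]);	/* take a reference */
	return ret;
}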

> The new normal lookup works faster for dense trees; on sparse trees it
> is slower.

I think that should be the common case, so that may be fine. Again, it
would be nice to see numbers for something other than just the raw
lookup - an actual use of it in some real context.

Anyway, the patches themselves looked fine to me, modulo the fact that
I wasn't all that happy with the new __find_next_bit, and I think it's
better not to expose it in a generic header file. But I would really
like to see more "real" numbers for the series.

Thanks,

                   Linus
