Date:   Wed, 30 Nov 2016 14:12:48 +0100
From:   Hannes Frederic Sowa <hannes@...essinduktion.org>
To:     David Miller <davem@...emloft.net>, david.lebrun@...ouvain.be
Cc:     netdev@...r.kernel.org
Subject: Re: [RFC PATCH net-next] ipv6: implement consistent hashing for
 equal-cost multipath routing

On Mon, Nov 28, 2016, at 21:32, David Miller wrote:
> From: David Lebrun <david.lebrun@...ouvain.be>
> Date: Mon, 28 Nov 2016 21:16:19 +0100
> 
> > The advantage of my solution over RFC 2992 is the lowest possible
> > disruption and equal rebalancing of affected flows. The disadvantage
> > is the lookup complexity: O(log n) vs. O(1). Although O(1) is
> > obviously better from a theoretical viewpoint, would O(log n) have a
> > measurable negative impact on scalability in practice? If we consider
> > 32 next-hops for a route and 100 pseudo-random numbers generated per
> > next-hop, the lookup algorithm would have to perform at most
> > ceil(log2(3200)) = 12 comparisons to select a next-hop for that route.
> 
> When I was working on the routing cache removal in ipv4, I compared
> using a stupid O(1) hash lookup of the FIB entries vs. the O(log n)
> fib_trie stuff actually in use.
> 
> It did make a difference.
> 
> This is a lookup that can be invoked 20 million times per second or
> more.
> 
> Every cycle matters.
> 
> We already have a lot of trouble getting under the cycle budget one
> has for routing at wire speed for very high link rates; please don't
> make it worse.
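
For context: the O(log n) selection described above amounts to a
binary search over a sorted array of pseudo-random points, each point
owned by one next-hop. A rough, purely illustrative sketch (made-up
names and layout, not the actual patch):

#include <linux/types.h>

struct ecmp_point {
        u32 hash;      /* pseudo-random point on the hash ring */
        u16 nh_index;  /* owning next-hop */
};

/* points[] is sorted by ->hash; npoints = nr_nexthops * K */
static u16 ecmp_lookup(const struct ecmp_point *points,
                       int npoints, u32 flow_hash)
{
        int lo = 0, hi = npoints - 1;

        while (lo < hi) {
                int mid = lo + (hi - lo) / 2;

                if (points[mid].hash < flow_hash)
                        lo = mid + 1;
                else
                        hi = mid;
        }
        /* wrap around if flow_hash lies past the last point */
        if (points[lo].hash < flow_hash)
                lo = 0;
        return points[lo].nh_index;
}

With 32 next-hops and 100 points each this searches 3200 entries,
i.e. about ceil(log2(3200)) = 12 dependent comparisons per packet,
against a total budget of roughly 3*10^9 / (20*10^6) = 150 cycles per
packet at 20 Mpps on a 3 GHz core.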

David, one question: do you remember whether you measured with linked
lists at the time, or also with arrays? I would expect small arrays
that fit entirely into a few cache lines to be faster than our current
approach, which also walks a linked list, probably the best possible
way to trash cache lines. I ask because I currently prefer this
approach over having large allocations in the O(1) case: it means
simpler code and easier management.
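
For concreteness, the layout I have in mind is something like the
sketch below (illustrative names only, not a proposed patch). With
8-byte slots, eight candidate next-hops share a single 64-byte cache
line, whereas a list of full next-hop structs costs one dependent
pointer load, and typically a cache miss, per candidate:

#include <linux/types.h>

struct nh_slot {
        u32 upper_bound;   /* cumulative weight boundary */
        u32 nh_index;      /* index into the real next-hop table */
};

/* Linear scan over a compact array: at most n/8 cache lines touched,
 * and the access pattern is sequential, so the prefetcher helps. */
static u32 nh_select_array(const struct nh_slot *slots, int n, u32 hash)
{
        int i;

        for (i = 0; i < n - 1; i++)
                if (hash <= slots[i].upper_bound)
                        break;
        return slots[i].nh_index;
}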

Thanks,
Hannes
