Message-ID: <47CDA6E4.1050505@ixiacom.com>
Date:	Tue, 04 Mar 2008 21:45:40 +0200
From:	Cosmin Ratiu <cratiu@...acom.com>
To:	netdev@...r.kernel.org
CC:	Octavian Purdila <opurdila@...acom.com>
Subject: inet established hash question

Hello,

I work at Ixia (most of you have probably heard of it); we do network
testing using a custom Linux distribution and some specialized
hardware. A while ago we ran into a scalability issue with large
numbers of TCP connections. We solved it by changing the established
hash function, but we'd like your opinion on the fix if you'd be kind
enough to share it.

Basically, the situation is as follows:
There is a client machine and a server machine. Both create 15000
virtual interfaces, open a socket for each pair of interfaces, and run
SIP traffic over it. Profiling showed that a lot of time was spent
walking the established hash chains with this particular setup. We were
using an old version of the kernel (2.6.7), which had the following
hash function:

static __inline__ int tcp_hashfn(__u32 laddr, __u16 lport,
                                 __u32 faddr, __u16 fport)
{
    int h = (laddr ^ lport) ^ (faddr ^ fport);
    h ^= h >> 16;
    h ^= h >> 8;
    return h & (tcp_ehash_size - 1);
}

The addresses were distributed like this: client interfaces were
198.18.0.1/16, incrementing by 1, and server interfaces were
198.18.128.1/16, also incrementing by 1. As I said, there were 15000
interfaces, and source and destination ports were 5060 for every
connection. So the ports don't matter for hashing purposes (lport ^
fport is always 0), and because client k always talks to server k, the
address bits cancel each other too: laddr ^ faddr is the same constant
(0.0.128.0) for every pair. With no differences across the whole lot of
pairs, all 15000 connections end up in the same hash chain.
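
For anyone who wants to reproduce this outside the kernel, here is a
small userspace sketch. The table size, address bases and loop count
are my assumptions based on the numbers above, not the exact values
from our setup:

#include <stdio.h>
#include <stdint.h>

#define TCP_EHASH_SIZE 65536  /* assumed table size, power of two */

static int tcp_hashfn(uint32_t laddr, uint16_t lport,
                      uint32_t faddr, uint16_t fport)
{
    int h = (laddr ^ lport) ^ (faddr ^ fport);
    h ^= h >> 16;
    h ^= h >> 8;
    return h & (TCP_EHASH_SIZE - 1);
}

int main(void)
{
    uint32_t cbase = 0xC6120000;  /* 198.18.0.0   */
    uint32_t sbase = 0xC6128000;  /* 198.18.128.0 */
    int k;

    /* client k talks to server k, both ports are 5060 */
    for (k = 0; k < 5; k++)
        printf("pair %d -> bucket %d\n", k,
               tcp_hashfn(cbase + 1 + k, 5060, sbase + 1 + k, 5060));

    /* all pairs print the same bucket: the ports cancel outright and
       laddr ^ faddr is the constant 0x00008000 for every pair */
    return 0;
}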

After investigating things, I noticed that the hash function was
changed in recent kernels to:
static inline unsigned int inet_ehashfn(const __be32 laddr, const __u16 lport,
                                        const __be32 faddr, const __be16 fport)
{
    return jhash_2words((__force __u32) laddr ^ (__force __u32) faddr,
                        ((__u32) lport) << 16 | (__force __u32) fport,
                        inet_ehash_secret);
}
We tested with the new function and the results are the same for this
case: the addresses are still XORed together before hashing, so the
bits cancel each other out exactly as before, and all connections end
up in the same chain.
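
To make it obvious why no hash function can help here, this snippet
(same assumed address bases as above) prints the two words that
actually get handed to jhash_2words for the first few pairs:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t cbase = 0xC6120000;  /* 198.18.0.0   */
    uint32_t sbase = 0xC6128000;  /* 198.18.128.0 */
    uint32_t ports = (5060u << 16) | 5060u;
    int k;

    for (k = 0; k < 5; k++)
        printf("pair %d -> a=0x%08x b=0x%08x\n", k,
               (cbase + 1 + k) ^ (sbase + 1 + k), ports);

    /* every pair produces a=0x00008000 and b=0x13c413c4; identical
       inputs give identical output no matter how good jhash is */
    return 0;
}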

So I changed the function yet again to stop XORing the addresses
before feeding them to the Jenkins hash. I ended up with something
like:
static __inline__ int tcp_hashfn(__u32 laddr, __u16 lport,
                                 __u32 faddr, __u16 fport)
{
    int h = jhash_3words(laddr, faddr, ((__u32)lport) << 16 | fport,
                         tcp_ehash_secret);

    return h & (tcp_ehash_size - 1);
}

This way, connections get distributed properly, both in this case and
in the other cases we have tested so far.
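
As a quick way to sanity-check this outside the kernel, here is a
rough userspace simulation of the distribution (not our actual test
code). The mixer below is a lookup3-style final round standing in for
the real jhash_3words (the kernel's constants differ), and the secret
and table size are made up, so treat it as a sketch only:

#include <stdio.h>
#include <stdint.h>

#define EHASH_SIZE 65536  /* assumed table size, power of two */
#define rol32(x, k) (((x) << (k)) | ((x) >> (32 - (k))))

/* lookup3-style final mixing round, a stand-in for jhash_3words */
static uint32_t mix3(uint32_t a, uint32_t b, uint32_t c, uint32_t seed)
{
    a += 0xdeadbeef + seed;
    b += 0xdeadbeef + seed;
    c += 0xdeadbeef + seed;
    c ^= b; c -= rol32(b, 14);
    a ^= c; a -= rol32(c, 11);
    b ^= a; b -= rol32(a, 25);
    c ^= b; c -= rol32(b, 16);
    a ^= c; a -= rol32(c, 4);
    b ^= a; b -= rol32(a, 14);
    c ^= b; c -= rol32(b, 24);
    return c;
}

int main(void)
{
    static uint32_t chain[EHASH_SIZE];
    uint32_t cbase = 0xC6120000, sbase = 0xC6128000;
    uint32_t ports = (5060u << 16) | 5060u;
    uint32_t seed = 0x12345678;  /* stands in for tcp_ehash_secret */
    uint32_t longest = 0;
    int k;

    /* hash laddr and faddr as separate words, as in the fixed function */
    for (k = 0; k < 15000; k++) {
        uint32_t h = mix3(cbase + 1 + k, sbase + 1 + k, ports, seed)
                     & (EHASH_SIZE - 1);
        if (++chain[h] > longest)
            longest = chain[h];
    }
    printf("longest chain over 15000 connections: %u\n", longest);
    return 0;
}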

So, thanks for reading through all this. My question is whether this
is a good thing to do or not. I am not that good with hash functions,
so I can't say for sure that we won't run into bad collisions with a
different setup.


Thank you,
Cosmin.

