Message-ID: <CA+55aFxLCppymSe8PfqFJjeCLcY1X4TkZYKp1qEeBpOo4voACA@mail.gmail.com>
Date: Sun, 1 May 2016 09:51:52 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: George Spelvin <linux@...izon.com>,
	David Woodhouse <David.Woodhouse@...el.com>,
	Chris Mason <chris.mason@...ionio.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Rik van Riel <riel@...hat.com>
Subject: Re: [patch 2/7] lib/hashmod: Add modulo based hash mechanism

On Sun, May 1, 2016 at 2:43 AM, George Spelvin <linux@...izon.com> wrote:
>
> Anyway, my recommendation (I'll write the patches if you like) is:
>
> * Modify the multiplicative constants to be
> #define GOLDEN_RATIO_32 0x61C88647
> #define GOLDEN_RATIO_64 0x61C8864680B583EB
Interestingly, looking at where hash_64() is used, I notice the btrfs
raid56 code. And the comment there in rbio_bucket():
* we shift down quite a bit. We're using byte
* addressing, and most of the lower bits are zeros.
* This tends to upset hash_64, and it consistently
* returns just one or two different values.
*
* shifting off the lower bits fixes things.
so people had actually noticed the breakage before.
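
For reference, the workaround in fs/btrfs/raid56.c looks roughly like
the sketch below. This is paraphrased from memory rather than copied
from the tree, so treat the rbio member name and the table-size
constant as stand-ins:

/*
 * Paraphrased sketch of the btrfs raid56 hack being discussed; the
 * rbio member and BTRFS_STRIPE_HASH_TABLE_BITS are stand-ins for
 * whatever the tree actually uses.
 */
static int rbio_bucket(struct btrfs_raid_bio *rbio)
{
	u64 num = rbio->raid_map_start;	/* byte address, low bits mostly zero */

	/*
	 * Shifting off the low zero bits keeps hash_64() from
	 * collapsing everything into one or two buckets.
	 */
	return hash_64(num >> 16, BTRFS_STRIPE_HASH_TABLE_BITS);
}
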
Adding DavidW and Chris Mason explicitly to the cc, to perhaps have
them verify that fixing the hash_64 constant would make that btrfs
hack unnecessary too.
Chris? Do you have a test-case? Do things end up being ok if you
change that GOLDEN_RATIO_64 value and remove the ">>16"?
> * Change the prototype of hash_64 to return u32.
Fair enough. That simplifies things for 32-bit. Not that there are a
_lot_ of 32-bit cases, and most of the callers seem to just assign the
value directly to an int or just use it as an index, so it's probably
not really ever going to change anything, but it might make it easier
to write simpler 32-bit code that uses two explicitly 32-bit fields
and mixes them.
> * Create a separate 32-bit implementation of hash_64() for the
> BITS_PER_LONG < 64 case. This should not be Wang's or anything
> similar because 64-bit shifts are slow and awkward on 32-bit
> machines.
> One option is something like __jhash_final(), but I think
> it will suffice to do:
>
> static __always_inline u32 hash_64(u64 val, unsigned int bits)
> {
> 	u32 hash = (u32)(val >> 32) * GOLDEN_RATIO_32 + (u32)val;
> 	hash *= GOLDEN_RATIO_32;
> 	return hash >> (32 - bits);
> }
> (S-o-b on that if you want it, of course.)
Agreed. However, we need to make very certain that nobody wants a
value bigger than 32 bits. It all looks fine: the name hash folding
does want exactly 32 bits, but that code is only enabled for 64-bit
anyway, so it would be ok.
But it might be worth double-checking that nobody uses hash_64() for
anything else odd. My quick grep didn't show anything, but it was
quick.
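
(Separately, it is easy to sanity-check the proposed fallback from
user space against the kind of byte addresses the btrfs case feeds it.
The hash below is just copied from the quoted sketch; the sample
addresses and the 11-bit bucket count are made up for illustration:)

#include <stdio.h>
#include <stdint.h>

#define GOLDEN_RATIO_32 0x61C88647u

/* George's proposed BITS_PER_LONG < 64 fallback, copied from above. */
static inline uint32_t test_hash_64(uint64_t val, unsigned int bits)
{
	uint32_t hash = (uint32_t)(val >> 32) * GOLDEN_RATIO_32 + (uint32_t)val;

	hash *= GOLDEN_RATIO_32;
	return hash >> (32 - bits);
}

int main(void)
{
	uint64_t addr;

	/* Byte addresses with the low 16 bits clear, raid56-style. */
	for (addr = 0; addr < 16; addr++)
		printf("%#llx -> bucket %u\n",
		       (unsigned long long)(addr << 16),
		       test_hash_64(addr << 16, 11));
	return 0;
}

That lets you eyeball whether adjacent stripe addresses still pile
into the same few buckets.
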
> * Eliminate the !CONFIG_ARCH_HAS_FAST_MULTIPLIER code path. Once we've
> got rid of the 32-bit processors, we're left with 64-bit processors,
> and they *all* have hardware multipliers. Even the MIPS R4000 (first
> 64-bit processor ever) and Alpha 21064 had one.
>
> (Admittedly, the R4000 was 20 cycles, which is slower than Wang's hash,
> but 18 of them could overlap other instructions.)
>
> The worst multiply support is SPARCv9, which has 64x64-bit
> multiply with a 64-bit result, but no widening multiply.
Ack.
> * If you feel ambitious, add a 32-bit CONFIG_ARCH_HAS_SLOW_MULTIPLIER
> exception path.
Let's make that a separate worry, and just fix hash_64() first.
In particular, that means "let's not touch GOLDEN_RATIO_32 yet". I
suspect that when we *do* change that value, we do want the
non-multiplying version you had.
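
(For illustration of the kind of non-multiplying version I mean: if I
remember the hash.h comment right, the existing GOLDEN_RATIO_PRIME_32
of 0x9e370001 was picked precisely because it is
2^31 + 2^29 - 2^25 + 2^22 - 2^19 - 2^16 + 1, so the multiply reduces
to shifts and adds. A multiply-free hash could look something like the
sketch below; the function name is made up, and a new 32-bit constant
would need its own decomposition:)

/*
 * Sketch only: multiply-free hashing with the existing 32-bit
 * constant.  0x9e370001 == 2^31 + 2^29 - 2^25 + 2^22 - 2^19 - 2^16 + 1,
 * so val * 0x9e370001 (mod 2^32) is just shifts, adds and subtracts.
 */
static inline u32 hash_32_nomul(u32 val, unsigned int bits)
{
	u32 hash = (val << 31) + (val << 29) - (val << 25) +
		   (val << 22) - (val << 19) - (val << 16) + val;

	return hash >> (32 - bits);
}
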
If you write out the patch for the hash_64 case as you outlined above,
I will definitely apply it. The current hash_64 is clearly broken, and
we now have two real-life examples (i.e. the futex problems and the
btrfs code).
Linus