Message-ID: <20160602023101.7364.qmail@ns.sciencehorizons.net>
Date: 1 Jun 2016 22:31:01 -0400
From: "George Spelvin" <linux@...encehorizons.net>
To: linux@...encehorizons.net, torvalds@...ux-foundation.org
Cc: bfields@...hat.com, linux-kernel@...r.kernel.org,
peterz@...radead.org
Subject: Re: [PATCH v3 06/10] fs/namei.c: Improve dcache hash function

Linus Torvalds wrote:
> On Mon, May 30, 2016 at 11:10 AM, George Spelvin wrote:
>>
>> I understand, but 64x64-bit multiply on 32-bit is pretty annoyingly
>> expensive. In time, code size, and register pressure which bloats
>> surrounding code.
> Side note, the code seems to work fairly well, but I do worry a bit
> about the three large multiplies in link_path_walk().
>
There are two in fold_hash(), and one comes from "find_zero()".
I do wonder about the second multiply in fold_hash().
For the 32-bit version, the outer __hash_32() could safely be deleted.
The 32-bit hash gets fed to hash_32() to reduce it to a hash table
index anyway.
(Specifically, it can be deleted from fs/namei.c and moved into
hash_str() and hash_mem() where it's useful in folding the hash value to
less than 32 bits.)
It's the 64-bit version that's an issue. I need to reduce 128 bits of
weakly mixed hash state to 32, and on x86 and PPC, two multiplies seems
like the fastest way. The second one could be done with a 32-bit multiply
instead, but 64-bit has been the same latency as 32 ever since Prescott
and Saltwell (Agner Fog says it's one cycle *faster* in many cases, which
I find hard to believe), so re-using the large immediate is a net win.
I could use two more iterations of HASH_MIX() or something similar,
then just take the x value, but that's 6 cycles. If a multiply is
4 or 5 cycles, that's a net loss.
An issue with the 64-bit version which I hadn't thought through is
false sharing with the length. As the comments say, nobody actually
uses the hash value until after some code that checks for special cases
like . and .. using the length.
On a 32-bit machine, the length and hash are in separate registers (%edx
and %eax) and scoreboarded separately, so it's possible to examine the
length without stalling on the hash.
But on a 64-bit machine, they're merged in %rax, and it's not possible to
extract the length without waiting for the hash. :-(
That puts the hash folding on the critical path, so maybe it needs
more attention.
> It turns out to work fairly well on at least modern big-core x86
> CPU's, because the multiplier is fairly beefy: low latency (3-4 cycles
> in the current crop) and fully pipelined.
>
> Even atom should be 5 cycles and a multiplication result every two
> cycles for 64-bit results.
>
> Maybe we don't care, because looking around the modern ARM and POWER
> cores do similarly, but I just wanted to point out that that code does
> seem to fairly heavily rely on "everybody has big and pipelined hw
> multipliers" for performance.
The problem is that it's so damn useful as a mixing function. When a multiplier
*is* available, with 3-4 cycle latency, it's hard to beat.
But worrying about that is the reason I left provision for arch-specific
hooks, and I'm already working on the first: the PA-RISC doesn't have
an integer multiplier at all, although the FPU can do 32-bit integer
multiplies.
(So much for my theory that 64-bit OOO CPUs always have grunty
multipliers! That said, the last PA-RISC came out in 2005.)
But it tries to be good at shift-and-add sequences for multiplies by
fixed integers.
Unfortunately, the best 64-bit multiply sequence I've come up with is
13 cycles, which is a mite painful. A few more HASH_MIX rounds looks
attractive in that case.