Message-ID: <20160502103101.20878.qmail@ns.horizon.com>
Date: 2 May 2016 06:31:01 -0400
From: "George Spelvin" <linux@...izon.com>
To: linux-kernel@...r.kernel.org, tglx@...utronix.de,
torvalds@...ux-foundation.org
Cc: eric.dumazet@...il.com, linux@...izon.com, riel@...hat.com
Subject: [RFC PATCH 4/2] namei: Improve hash mixing if CONFIG_DCACHE_WORD_ACCESS

The hash mixing performed between adding successive 64-bit chunks
of the name was a bit weak.

Replace it with a still very fast but slightly more effective
mixing function.

Signed-off-by: George Spelvin <linux@...izon.com>
---
As long as I was looking at all sorts of hashing in the kernel, I
noticed this.  I'm not sure if this is too expensive and will slow
down the loop.
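
For illustration (not part of the patch), here is a quick user-space
sketch comparing how well a single application of the old mixing
step, hash = (hash + a) * 9, and of the new xorshift step spreads a
one-bit input change.  The names mix_old, mix_new and avalanche are
mine, and it probes one arbitrary sample input rather than being a
rigorous avalanche test:

#include <stdio.h>
#include <stdint.h>

/* Old mixing step, applied to (hash + a): multiply by 9 */
static uint64_t mix_old(uint64_t hash)
{
	return hash * 9;
}

/* New mixing step: Marsaglia xorshift, as in the patch below */
static uint64_t mix_new(uint64_t hash)
{
	hash ^= hash << 13;
	hash ^= hash >> 7;
	hash ^= hash << 17;
	return hash;
}

/* Average output bits flipped per one-bit input change (ideal: ~32) */
static double avalanche(uint64_t (*mix)(uint64_t))
{
	uint64_t x = 0x0123456789abcdefULL;	/* arbitrary sample input */
	unsigned int total = 0;
	int bit;

	for (bit = 0; bit < 64; bit++)
		total += __builtin_popcountll(mix(x) ^ mix(x ^ (1ULL << bit)));
	return (double)total / 64;
}

int main(void)
{
	printf("old mix: %4.1f bits flipped on average\n", avalanche(mix_old));
	printf("new mix: %4.1f bits flipped on average\n", avalanche(mix_new));
	return 0;
}

(Build with "gcc -O2" and run; __builtin_popcountll assumes gcc or
clang.)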
fs/namei.c | 33 ++++++++++++++++++++++++++-------
1 file changed, 26 insertions(+), 7 deletions(-)
diff --git a/fs/namei.c b/fs/namei.c
index 1d9ca2d5..e2bff05d 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -1794,30 +1794,49 @@ static inline unsigned int fold_hash(unsigned long hash)
return hash_64(hash, 32);
}
+/*
+ * This is George Marsaglia's XORSHIFT generator.
+ * It implements a maximum-period LFSR in only a few
+ * instructions. It also has the property (required
+ * by hash_name()) that mix_hash(0) = 0.
+ */
+static inline unsigned long mix_hash(unsigned long hash)
+{
+ hash ^= hash << 13;
+ hash ^= hash >> 7;
+ hash ^= hash << 17;
+ return hash;
+}
+
#else /* 32-bit case */
#define fold_hash(x) (x)
+static inline unsigned long mix_hash(unsigned long hash)
+{
+ hash ^= hash << 13;
+ hash ^= hash >> 17;
+ hash ^= hash << 5;
+ return hash;
+}
+
#endif
unsigned int full_name_hash(const unsigned char *name, unsigned int len)
{
- unsigned long a, mask;
- unsigned long hash = 0;
+ unsigned long a, hash = 0;
for (;;) {
a = load_unaligned_zeropad(name);
if (len < sizeof(unsigned long))
break;
- hash += a;
- hash *= 9;
+ hash = mix_hash(hash + a);
name += sizeof(unsigned long);
len -= sizeof(unsigned long);
if (!len)
goto done;
}
- mask = bytemask_from_count(len);
- hash += mask & a;
+ hash += a & bytemask_from_count(len);
done:
return fold_hash(hash);
}
@@ -1835,7 +1854,7 @@ static inline u64 hash_name(const char *name)
hash = a = 0;
len = -sizeof(unsigned long);
do {
- hash = (hash + a) * 9;
+ hash = mix_hash(hash + a);
len += sizeof(unsigned long);
a = load_unaligned_zeropad(name+len);
b = a ^ REPEAT_BYTE('/');
--
2.8.1
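
Aside, for anyone following the word-at-a-time parts of the two hunks:
below is a stand-alone user-space illustration of the '/'-or-NUL
detection in hash_name() and the tail masking in full_name_hash().
It is plain C mimicking, not using, the kernel's
<asm/word-at-a-time.h> helpers; has_zero here is a simplified
little-endian version, and the buffer contents are made up:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define REPEAT_BYTE(x)	((uint64_t)(x) * 0x0101010101010101ULL)

/* Nonzero iff some byte of x is zero (classic bit-twiddling identity) */
static uint64_t has_zero(uint64_t x)
{
	return (x - REPEAT_BYTE(0x01)) & ~x & REPEAT_BYTE(0x80);
}

int main(void)
{
	const char buf[9] = "abc/defg";	/* 8 bytes of path text */
	uint64_t a, b, mask;

	memcpy(&a, buf, sizeof(a));	/* stands in for load_unaligned_zeropad() */
	b = a ^ REPEAT_BYTE('/');	/* any '/' byte becomes 0x00 in b */

	if (has_zero(a) | has_zero(b))
		puts("word contains '/' or NUL; stop and fold the hash");

	/* Tail masking as in full_name_hash(): bytemask_from_count(cnt)
	 * is ~(~0ul << 8*cnt) on little-endian; keep the first 3 bytes. */
	mask = ~(~0ULL << 8 * 3);
	printf("masked tail: %016llx\n", (unsigned long long)(a & mask));
	return 0;
}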