Message-ID: <20160430003206.25509.qmail@ns.horizon.com>
Date:	29 Apr 2016 20:32:06 -0400
From:	"George Spelvin" <linux@...izon.com>
To:	linux@...izon.com, torvalds@...ux-foundation.org
Cc:	linux-kernel@...r.kernel.org, tglx@...utronix.de
Subject: Re: [patch 2/7] lib/hashmod: Add modulo based hash mechanism

> At least for my tests, even that seems to actually be a total
> non-issue. Yes, odd values *might* be better, but as mentioned in my
> crossing email, it doesn't actually seem to matter for any case the
> kernel cares about, since we tend to want to hash down to 10-20 bits
> of data, so the least significant bit (particularly for the 64-bit
> case) just doesn't matter all that much.

Odd is important.  If the multiplier is even, the msbit of the input
doesn't affect the hash result at all: with an even multiplier 2k,
(x + 0x80000000) * 2k = x * 2k + k * 2^32, which is congruent to
x * 2k mod 2^32.  So x and (x + 0x80000000) hash to the same value,
always.  That just seems like a crappy hash function.

> Yeah. gcc will actually do the clever stuff for the 32-bit case, afaik.

It's not as clever as it could be; it just does the usual Booth
recoding thing: a straight-line series of shifts with add/subtract,
each instruction dependent on the previous one.

Here's the ARM code that GCC produces (9 instructions, all dependent):

mult1:
	add	r3, r0, r0, lsl #1	@ r3 = 3*x
	rsb	r3, r0, r3, lsl #5	@ r3 = 95*x
	add	r3, r3, r3, lsl #4	@ r3 = 1615*x
	rsb	r3, r3, r3, lsl #5	@ r3 = 50065*x
	add	r3, r0, r3, lsl #5	@ r3 = 1602081*x
	add	r3, r0, r3, lsl #1	@ r3 = 3204163*x
	add	r3, r0, r3, lsl #3	@ r3 = 25633305*x
	add	r3, r0, r3, lsl #3	@ r3 = 205066441*x
	rsb	r0, r0, r3, lsl #3	@ r0 = 1640531527*x = 0x61C88647*x
	bx	lr

versus the clever code (6 instructions, #4 and #5 could dual-issue):
mult2:
	add	r3, r0, r0, lsl #19	@ r3 = 524289*x
	add	r2, r3, r0, lsl #9	@ r2 = 524801*x
	add	r0, r2, r0, lsl #23	@ r0 = 8913409*x
	add	r3, r3, r2, lsl #8	@ r3 = 134873345*x
	rsb	r0, r0, r0, lsl #6	@ r0 = 561544767*x
	add	r0, r0, r3, lsl #3	@ r0 = 1640531527*x = 0x61C88647*x
	bx	lr
