Message-ID: <20200328180711.GC5859@SDF.ORG>
Date:   Sat, 28 Mar 2020 18:07:11 +0000
From:   George Spelvin <lkml@....ORG>
To:     Stephen Hemminger <stephen@...workplumber.org>
Cc:     linux-kernel@...r.kernel.org,
        Hannes Frederic Sowa <hannes@...essinduktion.org>,
        lkml@....org
Subject: Re: [RFC PATCH v1 09/50] <linux/random.h> prandom_u32_max() for
 power-of-2 ranges

On Sat, Mar 28, 2020 at 10:32:29AM -0700, Stephen Hemminger wrote:
> On Sat, 16 Mar 2019 02:32:04 -0400
> George Spelvin <lkml@....org> wrote:
> 
>> +static inline u32 prandom_u32_max(u32 range)
>>  {
>> -	return (u32)(((u64) prandom_u32() * ep_ro) >> 32);
>> +	/*
>> +	 * If the range is a compile-time constant power of 2, then use
>> +	 * a simple shift.  This is mathematically equivalent to the
>> +	 * multiplication, but GCC 8.3 doesn't optimize that perfectly.
>> +	 *
>> +	 * We could do an AND with a mask, but
>> +	 * 1) The shift is the same speed on a decent CPU,
>> +	 * 2) It's generally smaller code (smaller immediate), and
>> +	 * 3) Many PRNGs have trouble with their low-order bits;
>> +	 *    using the msbits is generally preferred.
>> +	 */
>> +	if (__builtin_constant_p(range) && (range & (range - 1)) == 0)
>> +		return prandom_u32() / (u32)(0x100000000 / range);
>> +	else
>> +		return reciprocal_scale(prandom_u32(), range);
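
(For the record, the equivalence claimed in that comment is elementary:
for range == 2^k, ((u64)x * 2^k) >> 32 == x >> (32 - k), and since
0x100000000 / 2^k == 1 << (32 - k), the division above compiles to
exactly that right shift.)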

> The optimization is good, but I don't think that the compiler
> is able to propagate the constant property into the function.
> Did you actually check the generated code?

Yes, I checked repeatedly during development.  I just rechecked the
exact code (it's been a while), and verified that

unsigned foo(void)
{
	return prandom_u32_max(256);
}

compiles to
foo:
.LFB1:
	.cfi_startproc
	subq	$8, %rsp
	.cfi_def_cfa_offset 16
	call	prandom_u32@PLT
	shrl	$24, %eax
	addq	$8, %rsp
	.cfi_def_cfa_offset 8
	ret
	.cfi_endproc
.LFE1:
	.size	foo, .-foo
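
For anyone who wants to reproduce this, a minimal standalone harness
along these lines works (the extern declaration is just a stand-in so
the compiler has something to call; build with gcc -O2 -S and look
for the shift):

#include <stdint.h>

uint32_t prandom_u32(void);	/* stand-in for the kernel's PRNG */

static inline uint32_t prandom_u32_max(uint32_t range)
{
	/* Same logic as the patch, with reciprocal_scale() open-coded. */
	if (__builtin_constant_p(range) && (range & (range - 1)) == 0)
		return prandom_u32() / (uint32_t)(0x100000000 / range);
	else
		return (uint32_t)(((uint64_t)prandom_u32() * range) >> 32);
}

unsigned foo(void)
{
	return prandom_u32_max(256);
}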

But you prompted me to check a few other architectures, and
it's true for them too.  E.g. m68k:

foo:
        jsr prandom_u32
        moveq #24,%d1
        lsr.l %d1,%d0
        rts

(68k is one architecture where a mask is faster than the shift, so I
could handle it separately, but that makes the code even uglier.
Basically: use masks for small ranges and shifts for large ranges,
with an arch-dependent threshold determined by the available
immediate constant range.)
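
The shape of that refinement would be something like the following;
ARCH_MASK_THRESHOLD is an invented per-arch tuning constant, not
anything in the tree, and the mask arm of course gives up the
msbits-preferred property from the comment:

/*
 * Illustrative sketch only.  Assumes range is a compile-time-constant
 * power of 2, as in the patch above, so both arms fold to a single
 * instruction.
 */
#define ARCH_MASK_THRESHOLD 0x8000	/* e.g. fits a 16-bit immediate */

static inline u32 prandom_u32_max_pow2(u32 range)
{
	if (range <= ARCH_MASK_THRESHOLD)
		return prandom_u32() & (range - 1);	/* AND: keeps lsbits */
	else
		return prandom_u32() / (u32)(0x100000000 / range);	/* shift: keeps msbits */
}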

ARM, PowerPC, and MIPS all have some hideously large function preamble
code, but the core is still a single right shift.  E.g. PowerPC:

foo:
.LFB1:
	.cfi_startproc
	stwu 1,-16(1)
	.cfi_def_cfa_offset 16
	mflr 0
	.cfi_register 65, 0
	bcl 20,31,.L2
.L2:
	stw 30,8(1)
	.cfi_offset 30, -8
	mflr 30
	addis 30,30,.LCTOC1-.L2@ha
	stw 0,20(1)
	addi 30,30,.LCTOC1-.L2@l
	.cfi_offset 65, 4
	bl prandom_u32+32768@plt
	lwz 0,20(1)
	lwz 30,8(1)
	addi 1,1,16
	.cfi_restore 30
	.cfi_def_cfa_offset 0
	srwi 3,3,24
	mtlr 0
	.cfi_restore 65
	blr
