Date:   Tue, 27 Sep 2022 08:35:10 +0200
From:   Dominik Brodowski <linux@...inikbrodowski.net>
To:     "Jason A. Donenfeld" <Jason@...c4.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        kasan-dev@...glegroups.com, Kees Cook <keescook@...omium.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        stable@...r.kernel.org
Subject: Re: [PATCH v2 1/2] random: split initialization into early step and
 later step

On Mon, Sep 26, 2022 at 11:31:29PM +0200, Jason A. Donenfeld wrote:
> The full RNG initialization relies on some timestamps, made possible
> with general functions like time_init() and timekeeping_init(). However,
> these are only available rather late in initialization. Meanwhile, other
> things, such as memory allocator functions, make use of the RNG much
> earlier.
> 
> So split RNG initialization into two phases. We can give arch randomness
> very early on, and then later, after timekeeping and such are available,
> initialize the rest.
> 
> This ensures that, for example, slabs are properly randomized if RDRAND
> is available. Without this, CONFIG_SLAB_FREELIST_RANDOM=y loses a degree
> of its security, because its random seed is potentially deterministic,
> since it hasn't yet incorporated RDRAND. It also makes it possible to
> use a better seed in kfence, which currently relies on only the cycle
> counter.
> 
> Another positive consequence is that on systems with RDRAND, running
> with CONFIG_WARN_ALL_UNSEEDED_RANDOM=y results in no warnings at all.
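
(For context while reading the hunks below: the split amounts to two init
hooks along the lines of this sketch. Only random_init() appears verbatim in
the quoted hunks; the early-step name and the placement notes here are
assumptions, not taken from the patch.)

	/* Phase 1, very early in boot: only arch sources such as
	 * RDRAND/RDSEED are usable, which is enough to seed e.g. the slab
	 * freelist randomization and kfence before the first allocations.
	 */
	void __init random_init_early(const char *command_line);	/* assumed name */

	/* Phase 2, once time_init()/timekeeping_init() have run: cycle
	 * counters and ktime_get_real() are meaningful, so timestamps can
	 * be mixed in as well.
	 */
	void __init random_init(void);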

Nice improvement. One question, though:

>  #if defined(LATENT_ENTROPY_PLUGIN)
>  	static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent_entropy;
> @@ -803,34 +798,46 @@ int __init random_init(const char *command_line)
>  			i += longs;
>  			continue;
>  		}
> -		entropy[0] = random_get_entropy();
> -		_mix_pool_bytes(entropy, sizeof(*entropy));
>  		arch_bits -= sizeof(*entropy) * 8;
>  		++i;
>  	}


Previously, random_get_entropy() was mixed into the pool up to
ARRAY_SIZE(entropy) times, once per iteration in which the arch sources
provided nothing.
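
For reference, a sketch of the removed behaviour (the outer loop shape is
assumed; only the fallback path is visible in the quoted hunk):

	for (i = 0; i < ARRAY_SIZE(entropy);) {
		/* ... arch_get_random*_longs() branches elided ... */

		/* Fallback when the arch sources delivered nothing: mix in a
		 * fresh cycle-counter sample, once per remaining slot, i.e.
		 * up to ARRAY_SIZE(entropy) times in total.
		 */
		entropy[0] = random_get_entropy();
		_mix_pool_bytes(entropy, sizeof(*entropy));
		arch_bits -= sizeof(*entropy) * 8;
		++i;
	}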

> +/*
> + * This is called a little bit after the prior function, and now there is
> + * access to timestamps counters. Interrupts are not yet enabled.
> + */
> +void __init random_init(void)
> +{
> +	unsigned long entropy = random_get_entropy();
> +	ktime_t now = ktime_get_real();
> +
> +	_mix_pool_bytes(utsname(), sizeof(*(utsname())));

But now it is mixed into the pool only once. Is this change intentional?
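
For scale: assuming entropy[] is still BLAKE2S_BLOCK_SIZE / sizeof(long)
longs, the old worst case was 64 / 8 = 8 cycle-counter mixes on 64-bit
(16 on 32-bit), whereas the later step now reduces to a single sample,
roughly:

	/* Sketch -- the line actually mixing the local into the pool is not
	 * part of the hunk quoted above, so it is an assumption here.
	 */
	unsigned long entropy = random_get_entropy();
	_mix_pool_bytes(&entropy, sizeof(entropy));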

Thanks,
	Dominik
