Message-ID: <aQ6EvdukQytvqK-u@zx2c4.com>
Date: Sat, 8 Nov 2025 00:46:05 +0100
From: "Jason A. Donenfeld" <Jason@...c4.com>
To: Thomas Weißschuh <thomas.weissschuh@...utronix.de>
Cc: Andy Lutomirski <luto@...nel.org>, Thomas Gleixner <tglx@...utronix.de>,
	Vincenzo Frascino <vincenzo.frascino@....com>,
	Arnd Bergmann <arnd@...db.de>,
	"David S. Miller" <davem@...emloft.net>,
	Andreas Larsson <andreas@...sler.com>,
	Nick Alcock <nick.alcock@...cle.com>,
	John Stultz <jstultz@...gle.com>, Stephen Boyd <sboyd@...nel.org>,
	John Paul Adrian Glaubitz <glaubitz@...sik.fu-berlin.de>,
	Shuah Khan <shuah@...nel.org>,
	Catalin Marinas <catalin.marinas@....com>,
	Will Deacon <will@...nel.org>, Theodore Ts'o <tytso@....edu>,
	Russell King <linux@...linux.org.uk>,
	Madhavan Srinivasan <maddy@...ux.ibm.com>,
	Michael Ellerman <mpe@...erman.id.au>,
	Nicholas Piggin <npiggin@...il.com>,
	Christophe Leroy <christophe.leroy@...roup.eu>,
	Huacai Chen <chenhuacai@...nel.org>,
	WANG Xuerui <kernel@...0n.name>,
	Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
	Heiko Carstens <hca@...ux.ibm.com>,
	Vasily Gorbik <gor@...ux.ibm.com>,
	Alexander Gordeev <agordeev@...ux.ibm.com>,
	Christian Borntraeger <borntraeger@...ux.ibm.com>,
	Sven Schnelle <svens@...ux.ibm.com>,
	Shannon Nelson <sln@...main.com>, linux-kernel@...r.kernel.org,
	sparclinux@...r.kernel.org, linux-kselftest@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org, linuxppc-dev@...ts.ozlabs.org,
	loongarch@...ts.linux.dev, linux-mips@...r.kernel.org,
	linux-s390@...r.kernel.org
Subject: Re: [PATCH v5 19/34] random: vDSO: only access vDSO datapage after random_init()

Hi Thomas,

I'm not a huge fan of this change:

On Thu, Nov 06, 2025 at 11:02:12AM +0100, Thomas Weißschuh wrote:
> +static DEFINE_STATIC_KEY_FALSE(random_vdso_is_ready);
>  
>  /* Control how we warn userspace. */
>  static struct ratelimit_state urandom_warning =
> @@ -252,6 +253,9 @@ static void random_vdso_update_generation(unsigned long next_gen)
>  	if (!IS_ENABLED(CONFIG_VDSO_GETRANDOM))
>  		return;
>  
> +	if (!static_branch_likely(&random_vdso_is_ready))
> +		return;
> +
>  	/* base_crng.generation's invalid value is ULONG_MAX, while
>  	 * vdso_k_rng_data->generation's invalid value is 0, so add one to the
>  	 * former to arrive at the latter. Use smp_store_release so that this
> @@ -274,6 +278,9 @@ static void random_vdso_set_ready(void)
>  	if (!IS_ENABLED(CONFIG_VDSO_GETRANDOM))
>  		return;
>  
> +	if (!static_branch_likely(&random_vdso_is_ready))
> +		return;
> +
>  	WRITE_ONCE(vdso_k_rng_data->is_ready, true);
>  }
>  
> @@ -925,6 +932,9 @@ void __init random_init(void)
>  	_mix_pool_bytes(&entropy, sizeof(entropy));
>  	add_latent_entropy();
>  
> +	if (IS_ENABLED(CONFIG_VDSO_GETRANDOM))
> +		static_branch_enable(&random_vdso_is_ready);
> +
>  	/*
>  	 * If we were initialized by the cpu or bootloader before jump labels
>  	 * or workqueues are initialized, then we should enable the static
> @@ -934,8 +944,10 @@ void __init random_init(void)
>  		crng_set_ready(NULL);
>  
>  	/* Reseed if already seeded by earlier phases. */
> -	if (crng_ready())
> +	if (crng_ready()) {
>  		crng_reseed(NULL);
> +		random_vdso_set_ready();
> +	}

The fact that the vdso datapage is set up by the time random_init() is
called seems incredibly contingent on init details. Why not instead make
this a structural part of the vdso setup code, which actually knows what
happens when? For example, one clean way of doing that would be to make
vdso_k_rng_data always valid by having it initially point to __initdata
memory, and then, when it's time to initialize the real datapage,
memcpy() the __initdata contents into the newly allocated memory. Then
we don't need the complex state tracking that this commit and the prior
one introduce.
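
Something like this, roughly (untested sketch; the bootstrap variable,
the install helper, and making vdso_k_rng_data an ordinary pointer are
all just illustrative):

static struct vdso_rng_data boot_rng_data __initdata;

/* Always valid: starts out pointing at the __initdata bootstrap copy. */
struct vdso_rng_data *vdso_k_rng_data = &boot_rng_data;

/* Called by the vdso setup code once the real datapage is allocated,
 * and necessarily before init memory is freed. */
void __init vdso_rng_data_install(struct vdso_rng_data *real)
{
	/* Carry over whatever the random core already published. */
	memcpy(real, &boot_rng_data, sizeof(*real));
	/* Kernel writers now update the real page, which the vdso maps. */
	smp_store_release(&vdso_k_rng_data, real);
}

That way random_vdso_update_generation() and random_vdso_set_ready() can
just write unconditionally, no matter how early they're called.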

Jason
