Message-ID: <CAHmME9oesPzz4ofe-wo_ZViM=uahL6WQo8-5ov7xjJN8ui1rsg@mail.gmail.com>
Date:   Thu, 24 Feb 2022 10:49:12 +0100
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     Dominik Brodowski <linux@...inikbrodowski.net>
Cc:     linux-kernel@...r.kernel.org, linux-crypto@...r.kernel.org,
        bigeasy@...utronix.de, Sultan Alsawaf <sultan@...neltoast.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        "Theodore Ts'o" <tytso@....edu>
Subject: Re: [PATCH] random: do crng pre-init loading in worker rather than irq

On 2/24/22, Dominik Brodowski <linux@...inikbrodowski.net> wrote:
> On Wed, Feb 23, 2022 at 07:55:11PM +0100, Jason A. Donenfeld wrote:
>> Taking spinlocks from IRQ context is problematic for PREEMPT_RT. That
>> is, in part, why we take trylocks instead. But apparently this still
>> trips up various lock dependency analyzers. That seems like a bug in the
>> analyzers that should be fixed, rather than having to change things
>> here.
>>
>> But maybe there's another reason to change things up: by deferring the
>> crng pre-init loading to the worker, we can use the cryptographic hash
>> function rather than xor, which is perhaps a meaningful difference when
>> considering this data has only been through the relatively weak
>> fast_mix() function.
>>
>> The biggest downside of this approach is that the pre-init loading is
>> now deferred until later, which means things that need random numbers
>> after interrupts are enabled, but before workqueues are running -- or
>> before this particular worker manages to run -- are going to get into
>> trouble. Hopefully in the real world, this window is rather small,
>> especially since this code won't run until 64 interrupts have occurred.
>>
>> Cc: Dominik Brodowski <linux@...inikbrodowski.net>
>> Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
>> Cc: Sultan Alsawaf <sultan@...neltoast.com>
>> Cc: Thomas Gleixner <tglx@...utronix.de>
>> Cc: Peter Zijlstra <peterz@...radead.org>
>> Cc: Theodore Ts'o <tytso@....edu>
>> Signed-off-by: Jason A. Donenfeld <Jason@...c4.com>
>> ---
>>  drivers/char/random.c | 62 ++++++++++++-------------------------------
>>  1 file changed, 17 insertions(+), 45 deletions(-)
>>
>> diff --git a/drivers/char/random.c b/drivers/char/random.c
>> index 536237a0f073..9fb06fc298d3 100644
>> --- a/drivers/char/random.c
>> +++ b/drivers/char/random.c
>> @@ -1298,7 +1278,12 @@ static void mix_interrupt_randomness(struct work_struct *work)
>>  	local_irq_enable();
>>
>>  	mix_pool_bytes(pool, sizeof(pool));
>> -	credit_entropy_bits(1);
>> +
>> +	if (unlikely(crng_init == 0))
>> +		crng_pre_init_inject(pool, sizeof(pool), true);
>> +	else
>> +		credit_entropy_bits(1);
>> +
>>  	memzero_explicit(pool, sizeof(pool));
>>  }
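
(Editorial sketch, not part of the quoted patch: the commit message above describes
moving the expensive work out of hard-IRQ context, so the interrupt handler only stirs
a per-CPU pool with fast_mix() and schedules a worker, and the worker then feeds that
pool through mix_pool_bytes() / crng_pre_init_inject() in process context. The rough
shape of that split is sketched below; fast_mix(), mix_pool_bytes(),
crng_pre_init_inject(), credit_entropy_bits() and crng_init are random.c internals
named in the quoted diff, everything else -- including the simplified fast_mix()
signature and the use of plain schedule_work() -- is illustrative, not the actual
drivers/char/random.c code.)

#include <linux/workqueue.h>
#include <linux/string.h>
#include <linux/irqflags.h>

/* Illustrative sketch only; relies on random.c internals declared elsewhere. */

struct fast_pool {
	struct work_struct mix;		/* deferred work item			*/
	unsigned long pool[4];		/* filled by fast_mix() in IRQ context	*/
	unsigned int count;		/* interrupts mixed in so far		*/
};

/* Hard-IRQ path: cheap and lock-free; defers everything heavy to the worker. */
static void add_interrupt_randomness_sketch(struct fast_pool *fp, int irq)
{
	fast_mix(fp->pool, irq);		/* weak but IRQ-safe mixing	*/
	if (++fp->count >= 64)			/* enough events collected	*/
		schedule_work(&fp->mix);	/* hash later, in process context */
}

/* Worker: runs in process context, so it may take locks and run the hash. */
static void mix_interrupt_randomness_sketch(struct work_struct *work)
{
	struct fast_pool *fp = container_of(work, struct fast_pool, mix);
	unsigned long pool[4];

	/* Snapshot the per-CPU pool without racing the IRQ path. */
	local_irq_disable();
	memcpy(pool, fp->pool, sizeof(pool));
	fp->count = 0;
	local_irq_enable();

	mix_pool_bytes(pool, sizeof(pool));	/* cryptographic hash, not xor	*/
	if (unlikely(crng_init == 0))
		crng_pre_init_inject(pool, sizeof(pool), true);
	else
		credit_entropy_bits(1);

	memzero_explicit(pool, sizeof(pool));
}
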
>
> Might it make sense to call crng_pre_init_inject() before mix_pool_bytes?

What difference do you see the order making? I keep chasing my tail
trying to think about it.

Jason
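
(Editorial note on the ordering question, as one possible reading of the exchange:
the two candidate shapes for the tail of mix_interrupt_randomness() would be,
roughly, the following; function names are taken from the quoted diff.)

/* As in the posted patch: mix into the input pool first. */
mix_pool_bytes(pool, sizeof(pool));
if (unlikely(crng_init == 0))
	crng_pre_init_inject(pool, sizeof(pool), true);
else
	credit_entropy_bits(1);

/* As Dominik asks about (one possible reading): inject before mixing. */
if (unlikely(crng_init == 0))
	crng_pre_init_inject(pool, sizeof(pool), true);
else
	credit_entropy_bits(1);
mix_pool_bytes(pool, sizeof(pool));

If neither call modifies the local pool buffer, both orderings feed identical bytes
to the input pool and to the pre-init crng, which may be why the difference is hard
to pin down; anything that does differ would have to come from side effects, such as
crng_init changing state between the two calls.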
