Message-ID: <CAHmME9prO9dop7iBRwN54=GMtLH7amS3m_VqGUzL44G1h=R+2A@mail.gmail.com>
Date:   Thu, 17 Feb 2022 18:53:22 +0100
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc:     Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        "Theodore Ts'o" <tytso@....edu>,
        Sultan Alsawaf <sultan@...neltoast.com>,
        Dominik Brodowski <linux@...inikbrodowski.net>
Subject: Re: [PATCH v5] random: clear fast pool, crng, and batches in cpuhp
 bring up

Hi Sebastian,

Thank you for finding the time to review this v5.

On Thu, Feb 17, 2022 at 6:44 PM Sebastian Andrzej Siewior
<bigeasy@...utronix.de> wrote:
> So I think this is the latest, right?

Yes.

> What do you think about this small comment update? :)

I can improve the comments for v6 of this patch, yes. I won't use your
text verbatim, since it contains a few errors of its own, but I'll
convey the same meaning.

> > +#ifdef CONFIG_SMP
> > +/*
> > + * This function is called by the cpuhp system, wired up via the large
> > + * static array in kernel/cpu.c, with the entry CPUHP_RANDOM_PREPARE.
> > + */
> > +int random_prepare_cpu(unsigned int cpu)
> > +{
> > +     /*
> > +      * When the cpu comes back online, immediately invalidate both
> > +      * the per-cpu crng and all batches, so that we serve fresh
> > +      * randomness.
> > +      */
> > +     per_cpu_ptr(&crngs, cpu)->generation = ULONG_MAX;
> > +     per_cpu_ptr(&batched_entropy_u32, cpu)->position = UINT_MAX;
> > +     per_cpu_ptr(&batched_entropy_u64, cpu)->position = UINT_MAX;
>
> This runs before the CPU is up. Could you do the initialisation right
> now?

That wouldn't accomplish anything. See below.

> My problem here is that if this (get_random_u32()) is used between
> CPUHP_AP_IDLE_DEAD and CPUHP_TEARDOWN_CPU then the initialisation will
> happen on the target CPU with disabled interrupts. And will acquire a
> sleeping lock (batched_entropy_u32.lock).

That is not a problem for this patchset to address at all. The batches
may well already be depleted and the crng already old, so the "problem"
you note is exactly the same both before and after this patch. If you
want to address it, please send a separate patch.
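
To make that concrete, here is a rough user-space model of the batch
path (the names batch_model, fill_entropy and model_get_random_u32 are
made up for illustration; this is not the actual random.c code, just a
sketch of its shape): once the batch is empty or invalidated, the next
caller refills it under the batch lock, no matter how the prepare hook
left it.

#include <limits.h>
#include <pthread.h>
#include <stdint.h>
#include <string.h>

struct batch_model {
        pthread_mutex_t lock;      /* stands in for the batch lock */
        uint32_t entropy[16];
        unsigned int position;     /* UINT_MAX means "invalid, refill" */
};

static struct batch_model batch = {
        .lock = PTHREAD_MUTEX_INITIALIZER,
        .position = UINT_MAX,
};

static void fill_entropy(uint32_t *buf, size_t n)
{
        /* stand-in for _get_random_bytes(); real entropy is beside
         * the point of this sketch */
        memset(buf, 0, n * sizeof(*buf));
}

static uint32_t model_get_random_u32(void)
{
        uint32_t ret;

        pthread_mutex_lock(&batch.lock);
        if (batch.position >= 16) {
                /* empty or invalidated: refill under the lock -- the
                 * acquisition you are worried about if a caller shows
                 * up with interrupts disabled */
                fill_entropy(batch.entropy, 16);
                batch.position = 0;
        }
        ret = batch.entropy[batch.position++];
        pthread_mutex_unlock(&batch.lock);
        return ret;
}

Whether random_prepare_cpu() invalidates the batch or initializes it,
that lock acquisition stays exactly where it is.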

> You could perform the initialization cross CPU without the lock because
> the CPU itself isn't ready yet. Something like
>          batch = per_cpu_ptr(&batched_entropy_u32, cpu);
>          _get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32));
>          batch->position = 0;
>          batch->generation = next_gen;

I guess, but it wouldn't solve anything. The entire batch could be
filled and then emptied again before interrupts are enabled, and the
problem you describe would simply repeat itself. I'm not going to make
any changes related to that in this patch.
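
Again in terms of that toy model (purely illustrative, same made-up
names as above): even with a prepare-time pre-fill along the lines you
suggest, the window you describe can drain the batch, and the next
caller is back to refilling under the lock with interrupts still off:

int main(void)
{
        /* 1. prepare-time pre-fill, as in your suggestion (done
         *    cross-CPU, no lock needed since the CPU isn't up yet) */
        fill_entropy(batch.entropy, 16);
        batch.position = 0;

        /* 2. early bring-up code, still with interrupts disabled,
         *    happens to consume the whole batch */
        for (int i = 0; i < 16; i++)
                (void)model_get_random_u32();

        /* 3. the next call finds the batch empty and refills it under
         *    the lock -- the same acquisition the pre-fill was meant
         *    to avoid */
        (void)model_get_random_u32();

        return 0;
}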

If you find out that there are actual users of get_random_{...} during
that window, and think that this represents a real problem, please
send a patch and we can discuss that then.

I'll send a v6 with the comments fixed up to your liking. I hope you can ack it.

Jason
