Date:   Wed, 9 Feb 2022 01:21:42 +0100
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     Eric Biggers <ebiggers@...nel.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
        "Theodore Ts'o" <tytso@....edu>,
        Dominik Brodowski <linux@...inikbrodowski.net>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: Re: [PATCH v1 6/7] random: use simpler fast key erasure flow on
 per-cpu keys

Hey Eric,

Thanks a bunch for the review.

On Wed, Feb 9, 2022 at 12:39 AM Eric Biggers <ebiggers@...nel.org> wrote:
> Previously, the extracted entropy was being XOR'ed into the ChaCha key.  Now the
> key is just being overwritten.  This is the approach we should be aiming for,
> but I'm concerned about the fact that add_interrupt_randomness() still sometimes
> adds entropy to the ChaCha key directly without mixing it into the entropy pool.
> That happens via crng_fast_load() when crng_init == 0.  This new approach will
> destroy any entropy that was present in the key only.
>
> The right fix, IMO, would be to always send entropy through the entropy pool.
> Until that is done, though, I'm not sure it's a good idea to overwrite the key
> like this.

I agree with this in principle, and I've been trying to think of a
good way to do it, but it's a bit hard to do because of the
workqueue-deferred dumping. I'll try to see if I can figure out
something there. In practice, it's not _such_ a big deal, as it's
"only" 64 credit-bits of entropy we're talking about. A somewhat hacky
but maybe satisfactory solution would be to hash the base_crng key
into the pool, once, just at the transition point. I'll think a bit
more on it.
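
Roughly something like this, as an untested sketch (it assumes the
blake2s-based mix_pool_bytes() helper in random.c, and the function
name here is made up), called once at the crng_init transition:

static void crng_absorb_early_key(void)
{
	unsigned long flags;

	spin_lock_irqsave(&base_crng.lock, flags);
	/*
	 * Fold whatever crng_fast_load() wrote directly into the key
	 * back into the input pool, so that overwriting the key
	 * afterwards doesn't throw that entropy away.
	 */
	mix_pool_bytes(base_crng.key, sizeof(base_crng.key));
	spin_unlock_irqrestore(&base_crng.lock, flags);
}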

>
> Unrelated: this function could use some comments that explain what is going on.

Will do.

>
> > +/*
> > + * The general form here is based on a "fast key erasure RNG" from:
> > + * https://blog.cr.yp.to/20170723-random.html
> > + */
> > +static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE],
> > +                               u32 chacha_state[CHACHA_STATE_WORDS],
> > +                               u8 random_data[32], size_t random_data_len)
>
> Given that random_data is variable-length it should be a 'u8 *'.

I'll do that and mention the size aspect in the comment on top.
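
For reference, with that signature change the body is just the fast
key erasure construction from the blog post: generate one ChaCha
block, immediately overwrite the key with its first 32 bytes, and
hand out the remainder. A sketch of the shape it takes (not
necessarily the exact patch code):

static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE],
				  u32 chacha_state[CHACHA_STATE_WORDS],
				  u8 *random_data, size_t random_data_len)
{
	u8 first_block[CHACHA_BLOCK_SIZE];

	BUG_ON(random_data_len > 32);

	/* Key the state with a fresh nonce/counter and make one block. */
	chacha_init_consts(chacha_state);
	memcpy(&chacha_state[4], key, CHACHA_KEY_SIZE);
	memset(&chacha_state[12], 0, sizeof(u32) * 4);
	chacha20_block(chacha_state, first_block);

	/* First 32 bytes become the new key; the rest is usable output. */
	memcpy(key, first_block, CHACHA_KEY_SIZE);
	memcpy(random_data, first_block + CHACHA_KEY_SIZE, random_data_len);
	memzero_explicit(first_block, sizeof(first_block));
}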

>
> Also, the above comment could use some more explanation.  I.e. what does this
> function actually *do*.  (I understand it, but a comment will really help any
> future readers...)

Will do.

>
> > +/*
> > + * This function returns a ChaCha block that you may use for generating
>
> ChaCha state, not ChaCha block.

Thanks.

>
> > + * random data. It also returns up to 32 bytes on its own of random data
> > + * that may be used.
> > + */
> > +static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS],
> > +                         u8 random_data[32], size_t random_data_len)
>
> Likewise, it's weird having random_data be declared as length 32 when it's
> actually variable-length.

Will fix.
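
I.e., something like this for the declaration and its comment (sketch
only, folding in your "state, not block" correction):

/*
 * This function returns a ChaCha state that the caller may use for
 * generating random data. It also returns up to 32 bytes on its own
 * of random data that may be used; random_data_len must not exceed 32.
 */
static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS],
			    u8 *random_data, size_t random_data_len);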

> > +     /* For the fast path, we check this unlocked first. */
> > +     if (unlikely(!crng_ready())) {
> > +             bool not_ready;
> > +
> > +             spin_lock_irqsave(&base_crng.lock, flags);
> > +             /* Now that we're locked, re-check. */
> > +             not_ready = !crng_ready();
> > +             if (not_ready)
> > +                     crng_fast_key_erasure(base_crng.key, chacha_state,
> > +                                           random_data, random_data_len);
> > +             spin_unlock_irqrestore(&base_crng.lock, flags);
> > +             if (!not_ready)
> > +                     return;
> > +     }
>
> Isn't the '!not_ready' check above backwards?  And in any case, doubly-inverted
> logic like that should be avoided.

You're right; it's all screwed up. I'll call it "ready" like it should
be and fix the logic on that last if statement. Thank you for pointing
this one out.
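
For concreteness, the fixed version would read something like this
(sketch): we should return only when we actually drew from base_crng,
i.e. when we were still not ready under the lock.

	if (unlikely(!crng_ready())) {
		bool ready;

		spin_lock_irqsave(&base_crng.lock, flags);
		/* Now that we're locked, re-check. */
		ready = crng_ready();
		if (!ready)
			crng_fast_key_erasure(base_crng.key, chacha_state,
					      random_data, random_data_len);
		spin_unlock_irqrestore(&base_crng.lock, flags);
		/* If we used base_crng directly, we're done. */
		if (!ready)
			return;
	}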

>
> And again, a few more comments please :-)

I'll pepper it up.

>
> >  struct batched_entropy {
> >       union {
> > -             u64 entropy_u64[CHACHA_BLOCK_SIZE / sizeof(u64)];
> > -             u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)];
> > +             u64 entropy_u64[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u64))];
> > +             u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))];
> >       };
>
> The above numbers could use an explanation.

It's the 32 bytes left over from fast key erasure plus one full ChaCha
block. I'll leave a big comment there explaining this.
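
To spell out the arithmetic: fast key erasure takes 32 of its first
block's 64 bytes for the new key, leaving 32 bytes of output, and one
more full 64-byte block is generated on top, so each batch refill
yields 96 bytes, i.e. CHACHA_BLOCK_SIZE * 3 / 2. The comment might
look like (sketch):

	/*
	 * Each batch refill produces the 32 bytes left over by fast
	 * key erasure plus one full 64-byte ChaCha block: 96 bytes in
	 * total, i.e. CHACHA_BLOCK_SIZE * 3 / 2.
	 */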

> There's also a comment at the top of the file that still claims that
> get_random_int() et al. don't implement backtracking protection.  That needs to
> be updated.

Will do. Saw this too after submitting.

Thanks again for your review!

Jason
