Message-ID: <CAHmME9ohyKKX4Qg_dyGq36MxFkhBoVQYYgs8uUoCfBkJNqfX7Q@mail.gmail.com>
Date: Thu, 10 Mar 2022 13:59:05 -0700
From: "Jason A. Donenfeld" <Jason@...c4.com>
To: Eric Biggers <ebiggers@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
"Theodore Ts'o" <tytso@....edu>,
Dominik Brodowski <linux@...inikbrodowski.net>
Subject: Re: [PATCH v2] random: reseed more often immediately after booting
Hey Eric,
On Wed, Mar 9, 2022 at 9:57 PM Eric Biggers <ebiggers@...nel.org> wrote:
> > not more than once per 5 minutes.
>
> Break up the above into multiple paragraphs?
Will do.
>
> > +/*
> > + * Return whether the crng seed is considered to be sufficiently
> > + * old that a reseeding might be attempted. This is the case 5,
> > + * 10, 20, 40, 80, and 160 seconds after boot, and after if the
> > + * last reseeding was CRNG_RESEED_INTERVAL ago.
> > + */
> > +static bool crng_has_old_seed(void)
> > +{
> > +	static unsigned int next_init_secs = 5;
> > +
> > +	if (unlikely(next_init_secs < CRNG_RESEED_INTERVAL / HZ)) {
>
> The read of 'next_init_secs' needs READ_ONCE(), since it can be written to
> concurrently.
Thanks, will fix.
> However, one thing that seems a bit odd is that this method can result in two
> reseeds with very little time in between. For example, if no one is using the
> RNG from second 40-78, but then it is used in seconds 79-80, then it will be
> reseeded at both seconds 79 and 80 if there is entropy available.
I've been sort of going back and forth on this. I think the idea is
that there are two relative time measurements. The ordinary one we use
is time since last reseeding. But at boot, we're trying to account for
the fact that entropy is coming in relative to the init process of the
system, which means we need it to be relative to boot time rather than
relative to the last reseeding. As you point out, this is a little
wonky with how things are now, because we only ever reseed on usage.
To get closer to what I have in mind, we could reseed in a timer, so
that it hits exactly on the 5, 10, 20, 40, etc. dot. But that seems a
bit cumbersome and maybe unnecessary. The v1 behavior you pointed out
has two possible consequences:
- We might reseed at 79, and then fail to reseed at 80. Consequence:
we lose 1 second of entropy that could have made it for that try.
- We might reseed at 79, and then reseed again at 80. Consequence:
that's a fairly quick refresh, but on the other hand, we're still
requiring 256 bits of entropy credit, so maybe not so bad, and if
we've got that much entropy coming in during such a small period of
time, maybe it really isn't a concern.
So I'm not sure either of these cases really matter that much.
> Perhaps the condition should still be:
>
> time_after(jiffies, READ_ONCE(base_crng.birth) + interval);
>
> ... as it is in the non-early case, but where 'interval' is a function of
> 'uptime' rather than always CRNG_RESEED_INTERVAL? Maybe something like:
>
>	interval = CRNG_RESEED_INTERVAL;
>	if (uptime < 2 * CRNG_RESEED_INTERVAL / HZ)
>		interval = max(5, uptime / 2) * HZ;
I'd experimented with things like that, for example making it exponential:
static bool early_boot = true;
unsigned long interval = CRNG_RESEED_INTERVAL;

if (unlikely(READ_ONCE(early_boot))) {
	unsigned int uptime = min_t(u64, INT_MAX, ktime_get_seconds());

	if (uptime >= CRNG_RESEED_INTERVAL / HZ)
		WRITE_ONCE(early_boot, false);
	else
		interval = (5U << fls(uptime / 5)) * HZ;
}
return time_after(jiffies, READ_ONCE(base_crng.birth) + interval);
But the whole thing of combining relative-to-last-reseed with
relative-to-boottime seems really strange. I'm not quite sure what
that's supposed to represent, whereas what I have in v1 is
"exponentially increasing intervals from boottime" which is fairly
easy to understand.
Jason