[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <Y3l6ocn1dTN0+1GK@zx2c4.com>
Date: Sun, 20 Nov 2022 01:53:53 +0100
From: "Jason A. Donenfeld" <Jason@...c4.com>
To: Eric Biggers <ebiggers@...nel.org>
Cc: linux-kernel@...r.kernel.org, patches@...ts.linux.dev,
linux-crypto@...r.kernel.org, x86@...nel.org,
Thomas Gleixner <tglx@...utronix.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Adhemerval Zanella Netto <adhemerval.zanella@...aro.org>,
Carlos O'Donell <carlos@...hat.com>
Subject: Re: [PATCH v5 2/3] random: introduce generic vDSO getrandom()
implementation
Hi Eric,
On Sat, Nov 19, 2022 at 03:10:12PM -0800, Eric Biggers wrote:
> > + if (IS_ENABLED(CONFIG_HAVE_VDSO_GETRANDOM))
> > + smp_store_release(&_vdso_rng_data.generation, next_gen + 1);
>
> Is the purpose of the smp_store_release() here to order the writes of
> base_crng.generation and _vdso_rng_data.generation? It could use a comment.
>
> > + if (IS_ENABLED(CONFIG_HAVE_VDSO_GETRANDOM))
> > + smp_store_release(&_vdso_rng_data.is_ready, true);
>
> Similarly, is the purpose of this smp_store_release() to order the writes to the
> the generation counters and is_ready? It could use a comment.
Yes, I guess so. Actually this comes from an unexplored IRC comment from
Andy back in July:
2022-07-29 21:21:56 <amluto> zx2c4: WRITE_ONCE(_vdso_rng_data.generation, next_gen + 1);
2022-07-29 21:22:23 <amluto> For x86 it shouldn’t matter much. For portability, smp_store_release
Though maybe that doesn't actually matter much? When the userspace CPU
learns about a change to vdso_rng_data, it's only course of action is
make a syscall to getrandom(), anyway, and those paths should be
consistent with themselves, thanks to the same locking and
synchronization there's always been there. So maybe I actually should
move back to WRITE_ONCE() here? Hm?
> > +static void memcpy_and_zero(void *dst, void *src, size_t len)
> > +{
> > +#define CASCADE(type) \
> > + while (len >= sizeof(type)) { \
> > + *(type *)dst = *(type *)src; \
> > + *(type *)src = 0; \
> > + dst += sizeof(type); \
> > + src += sizeof(type); \
> > + len -= sizeof(type); \
> > + }
> > +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> > +#if BITS_PER_LONG == 64
> > + CASCADE(u64);
> > +#endif
> > + CASCADE(u32);
> > + CASCADE(u16);
> > +#endif
> > + CASCADE(u8);
> > +#undef CASCADE
> > +}
>
> CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS doesn't mean that dereferencing
> misaligned pointers is okay. You still need to use get_unaligned() and
> put_unaligned(). Take a look at crypto_xor(), for example.
Right, thanks. Will do.
> There's a lot of subtle stuff going on here. Adding some more comments would be
> helpful. Maybe bring in some of the explanation that's currently only in the
> commit message.
Good idea.
> One question I have is about forking. So, when a thread calls fork(), in the
> child the kernel automatically replaces all vgetrandom_state pages with zeroed
> pages (due to MADV_WIPEONFORK). If the child calls __cvdso_getrandom_data()
> afterwards, it sees the zeroed state. But that's indistinguishable from the
> state at the very beginning, after sys_vgetrandom_alloc() was just called,
> right? So as long as this code handles initializing the state at the beginning,
> then I'd think it would naturally handle fork() as well.
Right, for this simple fork() case, it works fine. There are other cases
though that are trickier...
> However, it seems you have something a bit more subtle in mind, where the thread
> calls fork() *while* it's in the middle of __cvdso_getrandom_data(). I guess
> you are thinking of the case where a signal is sent to the thread while it's
> executing __cvdso_getrandom_data(), and then the signal handler calls fork()?
> Note that it doesn't matter if a different thread in the *process* calls fork().
>
> If it's possible for the thread to fork() (and hence for the vgetrandom_state to
> be zeroed) at absolutely any time, it probably would be a good idea to mark that
> whole struct as volatile.
Actually, this isn't something that matters, I don't think. If
state->key_batch is zeroed, the result will be wrong, but the function
logic will be fine. If state->pos is zeroed, it'll write to the
beginning of the batch, which might be wrong, but the function logic
will still be fine. That is, in both of these cases, even if the
calculation is wrong, there's no memory corruption or anything. So then,
the remaining member is state->generation. If this is zeroed, then it's
actually something we detect with that READ_ONCE()! And in this case,
it's a sign that something is off -- we forked -- and so we should start
over from the beginning. So I don't think there's a reason to mark the
whole struct as volatile. The one we care about is state->generation,
and for that we READ_ONCE() it at the place that matters.
There's actually a different scenario, though, that I'm concerned about,
and this is the case in which a multithreaded program forks in the
middle of one of its threads running this. Indeed, only the calling
thread will carry forward into the child process, but all the memory is
still left around from any concurrent threads in the middle of
vgetrandom(). And if they're in the middle of a vgetrandom() call, that
means they haven't yet done erasure and cleaned up the stack to prevent
their state from leaking, and so forward secrecy is potentially lost,
since the child process now has some state from the parent.
I'm not quite sure what the best approach here is. One idea would be to
just note that libcs should wait until vgetrandom() has returned
everywhere before forking, using its atfork functionality. Another
approach would be to say that multithreaded programs using this
shouldn't fork or something, but that seems disappointing. Or more state
could be allocated in the zeroing region, to hold a chacha state, so
another 64 bytes, which would be sort of unfortunate. Or something else?
I'd be interested to hear your impression of this quandary.
Jason
Powered by blists - more mailing lists