Message-ID: <20170816032106.afkykcbthxjbk3l2@thunk.org>
Date: Tue, 15 Aug 2017 23:21:06 -0400
From: Theodore Ts'o <tytso@....edu>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Borislav Petkov <bp@...en8.de>, Ingo Molnar <mingo@...nel.org>,
Willy Tarreau <w@....eu>,
Linus Torvalds <torvalds@...ux-foundation.org>,
x86-ml <x86@...nel.org>, "Jason A. Donenfeld" <Jason@...c4.com>,
lkml <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Nicholas Mc Guire <der.herr@...r.at>
Subject: Re: early x86 unseeded randomness
On Tue, Aug 15, 2017 at 04:42:47PM +0200, Thomas Gleixner wrote:
> Care to read the paper?
>
> We tried that 6 years ago on a wide range of machines, from servers to
> stupid first-generation in-order ATOM chips. All of them exhibited more
> or less the same behaviour and passed the randomness validation tests.
Yeah, I read the paper. It points out that in a tight loop, they *did*
see real patterns in the TSC values that they read out. But then they
did better by adding usleep(delay). This is critical, because the
measurements were being done in userspace, where presumably there were
other processes running (kernel threads, if nothing else).
Whether this would still be realistic in early boot --- when interrupts
may not have been fully enabled, many devices haven't even been probed
yet, and there are no other userspace processes to schedule against;
where you can't use usleep because the scheduler may not have been
initialized yet; and where udelay is *way* more boring than usleep ---
is something I don't think you can rely on that paper to answer.
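For contrast, here's the same sketch with the usleep() swapped for a
TSC busy-wait, which is roughly what a TSC-calibrated udelay() boils
down to. (This is still just a userspace illustration; a real
early-boot environment would have even less going on to perturb the
deltas.)

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>		/* __rdtsc() */

/* Deterministic delay: spin on the very clock we're sampling,
 * the way a TSC-calibrated udelay() does.  Illustrative only. */
static void spin_cycles(uint64_t n)
{
	uint64_t start = __rdtsc();
	while (__rdtsc() - start < n)
		;
}

int main(void)
{
	uint64_t prev = __rdtsc();

	for (int i = 0; i < 64; i++) {
		spin_cycles(100000);	/* nothing sleeps, nothing reschedules */
		uint64_t now = __rdtsc();
		/* with little to preempt us, these deltas barely vary */
		printf("%llu\n", (unsigned long long)((now - prev) & 0xff));
		prev = now;
	}
	return 0;
}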
So I think a lot of care is required here.....
- Ted