Message-ID: <alpine.DEB.2.20.1708151927520.2072@nanos>
Date: Tue, 15 Aug 2017 19:37:05 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Theodore Ts'o <tytso@....edu>
cc: Borislav Petkov <bp@...en8.de>, Ingo Molnar <mingo@...nel.org>,
Willy Tarreau <w@....eu>,
Linus Torvalds <torvalds@...ux-foundation.org>,
x86-ml <x86@...nel.org>, "Jason A. Donenfeld" <Jason@...c4.com>,
lkml <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Nicholas Mc Guire <der.herr@...r.at>
Subject: Re: early x86 unseeded randomness
On Tue, 15 Aug 2017, Thomas Gleixner wrote:
> On Tue, 15 Aug 2017, Theodore Ts'o wrote:
> > On Tue, Aug 15, 2017 at 03:48:18PM +0200, Thomas Gleixner wrote:
> > > > > +u64 __init tsc_early_random(void)
> > > > > +{
> > > > > + u64 uninitialized_var(res);
> > > > > + int i;
> > > > > +
> > > > > + if (!boot_cpu_has(X86_FEATURE_TSC))
> > > > > + return res;
> > > > > +
> > > > > + res ^= rdtsc();
> > > > > + for (i = 0; i < BITS_PER_LONG; i++) {
> > > > > + res ^= ((rdtsc() & 0x04) >> 2) << i;
> > > > > + udelay(2);
> > > > > + }
> > > > > + return res;
> > > > > +}
> >
> > Reasons why this is probably not the best idea:
> >
> > 1) Exactly how udelay is implemented varies from architecture to
> > architecture and in some cases is different on a subarchitectural
> > level. Some of them rely on reading the TSC; others rely on
> > operations that will have a constant number of CPU cycles (e.g., they
> > aren't doing much if any operations that might even have a tiny
> > glimmer of hope of adding unpredictability).
>
> That's not really true. You can add random shite instead of udelay(2). The
> point of this exercise is to somewhat utilize the instruction pipeline,
> which causes the TSC readouts to be unevenly spread over the loop and
> therefore yield random results.
Talking about random shite:

	memset(foo, 0, sizeof(foo));
	res ^= rdtsc();
	for (i = 0; i < BITS_PER_LONG; i++) {
		/* Will never happen ... */
		if (memchr_inv(foo, i, sizeof(foo)))
			continue;
		res ^= ((rdtsc() & 0x04) >> 2) << i;
		memset(foo, i, sizeof(foo));
		wbinvd();
	}
	return res;
That exploits the fact that the CPU and caches run on a different,
non-synchronized clock than the memory controller, and therefore the
execution time of both the wbinvd() and the memchr_inv(), measured in TSC
cycles, is non-constant and random enough for the early boot randomization.
Thanks,
tglx