Date:   Sat, 8 Aug 2020 11:13:20 -0700
From:   Linus Torvalds <torvalds@...ux-foundation.org>
To:     Andy Lutomirski <luto@...capital.net>
Cc:     George Spelvin <lkml@....org>, Netdev <netdev@...r.kernel.org>,
        Willy Tarreau <w@....eu>, Amit Klein <aksecurity@...il.com>,
        Eric Dumazet <edumazet@...gle.com>,
        "Jason A. Donenfeld" <Jason@...c4.com>,
        Andrew Lutomirski <luto@...nel.org>,
        Kees Cook <keescook@...omium.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        "Theodore Ts'o" <tytso@....edu>,
        Marc Plumb <lkml.mplumb@...il.com>,
        Stephen Hemminger <stephen@...workplumber.org>
Subject: Re: Flaw in "random32: update the net random state on interrupt and activity"

On Sat, Aug 8, 2020 at 10:07 AM Andy Lutomirski <luto@...capital.net> wrote:
>
>
> > On Aug 8, 2020, at 8:29 AM, George Spelvin <lkml@....org> wrote:
> >
>
> > And apparently switching to the fastest secure PRNG currently
> > in the kernel (get_random_u32() using ChaCha + per-CPU buffers)
> > would cause too much performance penalty.
>
> Can someone explain *why* the slow path latency is particularly relevant here?
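
For context, the batched scheme being referred to looks roughly like
the userspace sketch below -- illustrative names only, not the
kernel's actual code. The point it shows: the common case is just a
copy of one word out of a pre-generated 64-byte ChaCha block, and the
cipher only runs on the refill (slow) path.

/*
 * Rough userspace sketch of the "ChaCha + per-CPU buffer" scheme
 * behind get_random_u32() -- illustrative only, not the kernel's
 * actual code.  The common case just hands out one word from a
 * pre-generated 64-byte ChaCha block; only the refill path runs
 * the cipher.
 */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define ROTL32(v, n) (((v) << (n)) | ((v) >> (32 - (n))))

#define QR(a, b, c, d) do {                        \
		a += b; d ^= a; d = ROTL32(d, 16); \
		c += d; b ^= c; b = ROTL32(b, 12); \
		a += b; d ^= a; d = ROTL32(d, 8);  \
		c += d; b ^= c; b = ROTL32(b, 7);  \
	} while (0)

/* One ChaCha20 block: 16 input words in, 16 output words out. */
static void chacha20_block(const uint32_t in[16], uint32_t out[16])
{
	uint32_t x[16];
	int i;

	memcpy(x, in, sizeof(x));
	for (i = 0; i < 10; i++) {		/* 20 rounds = 10 double rounds */
		QR(x[0], x[4], x[8],  x[12]);	/* column rounds */
		QR(x[1], x[5], x[9],  x[13]);
		QR(x[2], x[6], x[10], x[14]);
		QR(x[3], x[7], x[11], x[15]);
		QR(x[0], x[5], x[10], x[15]);	/* diagonal rounds */
		QR(x[1], x[6], x[11], x[12]);
		QR(x[2], x[7], x[8],  x[13]);
		QR(x[3], x[4], x[9],  x[14]);
	}
	for (i = 0; i < 16; i++)
		out[i] = x[i] + in[i];
}

/* Per-CPU in the kernel; a single instance here for simplicity. */
struct batched_u32 {
	uint32_t words[16];		/* one buffered ChaCha block */
	unsigned int pos;		/* next unread word; 16 == empty */
};

static struct batched_u32 batch = { .pos = 16 };
static uint32_t chacha_state[16];	/* constants, key, counter, nonce */

static uint32_t get_random_u32_sketch(void)
{
	if (batch.pos >= 16) {			/* slow path: refill */
		chacha20_block(chacha_state, batch.words);
		chacha_state[12]++;		/* bump the block counter */
		batch.pos = 0;
	}
	return batch.words[batch.pos++];	/* fast path: one copy */
}

int main(void)
{
	int i;

	/* All-zero key here, so this only demonstrates the data flow. */
	for (i = 0; i < 4; i++)
		printf("%08x\n", get_random_u32_sketch());
	return 0;
}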

You guys know what's relevant?  Reality.

The thing is, I see Spelvin's rant, and I've generally enjoyed our
encounters in the past, but I look back at the kinds of security
problems we've had, and they are the exact _reverse_ of what
cryptographers and Spelvin are worried about.

The fact is, nobody ever EVER had any practical issues with our
"secure hash function" even back when it was MD5, which is today
considered trivially breakable.

Thinking back on it, I don't think it was even md5. I think it was
half-md5, wasn't it?

So what have people had _real_ security problems with in our
random generators - pseudo or not?

EVERY SINGLE problem I can remember was because some theoretical
crypto person said "I can't guarantee that" and removed real security
- or kept it from being merged.

Seriously.

We've had absolutely horrendous performance issues due to the
so-called "secure" random number thing blocking. End result: people
didn't use it.

We've had absolutely garbage fast random numbers because the crypto
people don't think performance matters, so the people who _do_ think
it matters just roll their own.

We've had "random" numbers that weren't, because random.c wanted to
use only use inputs it can analyze and as a result didn't use any
input at all, and every single embedded device ended up having the
exact same state (this ends up being intertwined with the latency
thing).

We've had years of the drivers/char/random.c not getting fixed because
Ted listens too much to the "serious crypto" guys, with the end result
that I then have to step in and say "f*ck that, we're doing this
RIGHT".

And RIGHT means caring about reality, not theory.

So instead of asking "why is the slow path relevant", the question
should be "why is some theoretical BS relevant", when history says it
has never ever mattered.

Our random number generation needs to be _practical_.

When Spelvin and Marc complain about us now taking the fastpool data
and adding it to the prandom pool and "exposing" it, they are full of
shit. That fastpool data gets whitened so much by two layers of
further mixing that there is no way you'll ever see any sign of us
taking one word of it for other things.

I want to hear _practical_ attacks against this whole "let's mix in
the instruction pointer and the cycle counter into both the 'serious'
random number generator and the prandom one".
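
Concretely, the shape of that mixing is something like the sketch
below -- again just a userspace illustration of the idea, not the
actual patch, and fast_mix() here is a stand-in rather than a crypto
primitive: the cycle counter and the interrupted instruction pointer
get folded into a small per-CPU fast pool that feeds the serious
input pool, and one word of that same pool also nudges the prandom
state.

/*
 * Userspace illustration only -- not the kernel patch itself.
 */
#include <stdint.h>

struct fast_pool {			/* feeds the "serious" input pool */
	uint32_t pool[4];
	unsigned long count;
};

struct rnd_state {			/* xorshift-style prandom state */
	uint32_t s1, s2, s3, s4;
};

static struct fast_pool irq_fast_pool;		/* per-CPU in the kernel */
static struct rnd_state net_rand_state = {
	0x9e3779b9, 0x3c6ef372, 0x78dde6e5, 0xa54ff53a
};

/* Cheaply fold two new words into the pool (not a crypto primitive). */
static void fast_mix(struct fast_pool *f, uint32_t cycles, uint32_t ip)
{
	f->pool[0] ^= cycles;
	f->pool[1] ^= ip;
	f->pool[2] += f->pool[0];
	f->pool[3] += f->pool[1];
	f->pool[0] = (f->pool[0] << 7)  | (f->pool[0] >> 25);
	f->pool[1] = (f->pool[1] << 11) | (f->pool[1] >> 21);
	f->count++;
}

/* Called from the interrupt path in this sketch. */
static void add_interrupt_randomness_sketch(uint32_t cycles, uint32_t ip)
{
	fast_mix(&irq_fast_pool, cycles, ip);

	/*
	 * The contested part: one word of the fast pool also perturbs
	 * the prandom state, so its output is no longer a pure
	 * function of its original seed.
	 */
	net_rand_state.s1 += irq_fast_pool.pool[cycles & 3];
}

int main(void)
{
	/* Fake interrupt; in the kernel, cycles/ip come from hardware. */
	add_interrupt_randomness_sketch(0x12345678u, 0xdeadbeefu);
	return 0;
}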

Because I don't think they exist. And I think it's actively dangerous
to continue on the path of thinking that stable and "serious"
algorithms that can be analyzed are better than ones that end up
depending on random hardware details.

I really think kernel random people need to stop this whole "in
theory" garbage, and embrace the "let's try to mix in small
non-predictable things in various places to actively make it hard to
analyze this".

Because at some point, "security by obscurity" is actually better than
"analyze this".

And we have decades of history to back this up. Stop with the crazy
"let's only use very expensive stuff that has been analyzed".

It's actively detrimental.

It's detrimental to performance, but it's also detrimental to real security.

And stop dismissing performance as "who cares". Because it's just
feeding into this bad loop.

               Linus
