Message-ID: <20190502150853.GA16779@gmail.com>
Date:   Thu, 2 May 2019 17:08:53 +0200
From:   Ingo Molnar <mingo@...nel.org>
To:     Andy Lutomirski <luto@...nel.org>
Cc:     David Laight <David.Laight@...lab.com>,
        "Reshetova, Elena" <elena.reshetova@...el.com>,
        Theodore Ts'o <tytso@....edu>,
        Eric Biggers <ebiggers3@...il.com>,
        "ebiggers@...gle.com" <ebiggers@...gle.com>,
        "herbert@...dor.apana.org.au" <herbert@...dor.apana.org.au>,
        Peter Zijlstra <peterz@...radead.org>,
        "keescook@...omium.org" <keescook@...omium.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "jpoimboe@...hat.com" <jpoimboe@...hat.com>,
        "jannh@...gle.com" <jannh@...gle.com>,
        "Perla, Enrico" <enrico.perla@...el.com>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "bp@...en8.de" <bp@...en8.de>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH] x86/entry/64: randomize kernel stack offset upon syscall


* Andy Lutomirski <luto@...nel.org> wrote:

> Or we decide that calling get_random_bytes() is okay with IRQs off and 
> this all gets a bit simpler.

BTW, before we go down this path any further, is the plan to bind this 
feature to a real CPU-RNG capability, i.e. to the RDRAND instruction, 
which excludes a significant group of x86 CPUs?

Because a workload making tens of millions of system calls per second 
will deplete any non-CPU-RNG source of entropy and will also starve all 
other users of random numbers, which might have a more legitimate need 
for randomness, such as the networking stack ...

I.e. I'm really *super sceptical* of this whole plan, as currently 
formulated.

If we bind it to RDRAND, then we shouldn't be using the generic 
drivers/char/random.c pool *at all*, but should just call the darn 
instruction directly. This is an x86 patch-set after all, right?
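
For illustration, a minimal sketch of what "calling the instruction 
directly" could look like (not the kernel's existing helper; the retry 
bound of 10 is an arbitrary choice for this sketch - RDRAND clears CF 
when it momentarily has no data, so callers are expected to retry):

/*
 * Rough sketch: pull 64 random bits straight from the CPU,
 * bypassing drivers/char/random.c entirely.  RDRAND signals
 * failure by clearing CF, hence the SETC and the retry loop.
 */
static inline bool rdrand_u64(unsigned long *v)
{
	int retries = 10;	/* arbitrary bound for this sketch */
	bool ok;

	while (retries--) {
		asm volatile("rdrand %0\n\tsetc %1"
			     : "=r" (*v), "=qm" (ok));
		if (ok)
			return true;
	}
	return false;
}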

Furthermore, the following post suggests that RDRAND isn't a per-CPU 
capability, but a core- or socket-level facility, depending on CPU make:

  https://stackoverflow.com/questions/10484164/what-is-the-latency-and-throughput-of-the-rdrand-instruction-on-ivy-bridge

8 gigabits/sec sounds like good throughput in principle, provided there 
are no scalability pathologies at that rate.

It would also be nice to know whether RDRAND does buffering 
*internally*, in which case it might be better to buffer as little as 
possible at the system call level, to allow the hardware RNG buffer to 
refill between system calls.
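
A quick user-space probe could hint at that (a hedged sketch of mine, 
not something from this thread: if the DRNG buffers internally, a short 
burst should show a lower per-call cost than a long back-to-back run; 
build with -mrdrnd):

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>	/* __rdtsc(), _rdrand64_step() */

/* Average RDTSC cycles per 64-bit RDRAND over n back-to-back calls. */
static uint64_t avg_cycles(int n)
{
	unsigned long long v = 0;
	uint64_t t0 = __rdtsc();

	for (int i = 0; i < n; i++)
		_rdrand64_step(&v);

	return (__rdtsc() - t0) / n;
}

int main(void)
{
	/* A large gap between the two would suggest an internal
	 * buffer that the long run keeps draining. */
	printf("burst of 8:     %llu cycles/rdrand\n",
	       (unsigned long long)avg_cycles(8));
	printf("run of 1000000: %llu cycles/rdrand\n",
	       (unsigned long long)avg_cycles(1000000));
	return 0;
}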

I.e. I'd suggest retrieving randomness via a fixed number of RDRAND-r64 
calls (where '1' is a perfectly valid block size - the right number 
should be measured), whose random bits are then used as-is for the ~6 
bits of system call stack offset. (I'd even suggest 7 bits: that skips 
a full cache line almost for free and makes the fuzz actually 
meaningful: no spear attacker will take a 1/128 (~0.8%) chance of 
successfully attacking a critical system.)

Then those 64*N random bits get buffered and consumed in 5-7 bit 
chunks, in a super efficient fashion, possibly with an inlined fast 
path, entirely outside the flow of drivers/char/random.c.
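
Concretely, a hedged sketch of such a buffer (hypothetical names, 
reusing the rdrand_u64() sketch from above; the per-CPU plumbing and a 
proper fallback for RDRAND failure are elided; assumes 64-bit longs):

#define RND_WORDS	2	/* N, to be measured */
#define OFFSET_BITS	7	/* 128 possible stack offsets */

struct stack_rnd_buf {
	unsigned long words[RND_WORDS];
	unsigned int  bits_left;
};

static unsigned int stack_offset_bits(struct stack_rnd_buf *b)
{
	unsigned int chunk;

	/* Refill with N RDRAND-r64 calls once the buffer runs dry. */
	if (b->bits_left < OFFSET_BITS) {
		for (int i = 0; i < RND_WORDS; i++)
			if (!rdrand_u64(&b->words[i]))
				b->words[i] = 0;	/* real code: fall back */
		b->bits_left = RND_WORDS * 64;
	}

	chunk = b->words[0] & ((1UL << OFFSET_BITS) - 1);

	/* Shift the 128-bit buffer down by OFFSET_BITS. */
	b->words[0] = (b->words[0] >> OFFSET_BITS) |
		      (b->words[1] << (64 - OFFSET_BITS));
	b->words[1] >>= OFFSET_BITS;
	b->bits_left -= OFFSET_BITS;

	return chunk;	/* 0..127, scaled into the stack offset */
}

At 7 bits per system call, one two-word refill covers 18 system calls 
before the next pair of RDRANDs is needed.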

Any non-CPU source of randomness for system calls, or any plan to add 
several extra function calls to every x86 system call, is crazy talk I 
believe ...

Thanks,

	Ingo
