Message-ID: <20190417062454.GA45199@gmail.com>
Date: Wed, 17 Apr 2019 08:24:54 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Theodore Ts'o <tytso@....edu>,
David Laight <David.Laight@...LAB.COM>,
'Peter Zijlstra' <peterz@...radead.org>,
"Reshetova, Elena" <elena.reshetova@...el.com>,
Daniel Borkmann <daniel@...earbox.net>,
"luto@...nel.org" <luto@...nel.org>,
"luto@...capital.net" <luto@...capital.net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"jpoimboe@...hat.com" <jpoimboe@...hat.com>,
"keescook@...omium.org" <keescook@...omium.org>,
"jannh@...gle.com" <jannh@...gle.com>,
"Perla, Enrico" <enrico.perla@...el.com>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>
Subject: Re: [PATCH] x86/entry/64: randomize kernel stack offset upon syscall
* Theodore Ts'o <tytso@....edu> wrote:
> It seems, though, that we're assuming the attacker has the arbitrary
> ability to read the low bits of the stack, so *if* that's true, then
> eventually you'd be able to gather enough samples to reverse engineer
> the prandom state. This could take long enough that the process will
> have been rescheduled to another CPU, and since the prandom state is
> per-CPU, that adds another wrinkle.
Yeah.
Note that if the attacker has this level of local access then they can
probably also bind the task to a CPU, which would increase the
statistical stability of any attack. Plus, with millions of system calls
per second executed in an attack, each of which exposes a couple of bits
of prandom state, I'm pretty sure some attack exists that makes
extraction of the full internal state probable within the ~60-second
reseeding interval. (Is there any research on this, or do researchers
not even bother, because this isn't a secure algorithm in any reasonable
meaning of the word?)
Thanks,
Ingo