Message-ID: <2236FBA76BA1254E88B949DDB74E612BA4CABA56@IRSMSX102.ger.corp.intel.com>
Date: Wed, 29 May 2019 10:13:43 +0000
From: "Reshetova, Elena" <elena.reshetova@...el.com>
To: Theodore Ts'o <tytso@....edu>
CC: Kees Cook <keescook@...omium.org>, Ingo Molnar <mingo@...nel.org>,
"Andy Lutomirski" <luto@...nel.org>,
David Laight <David.Laight@...lab.com>,
"Eric Biggers" <ebiggers3@...il.com>,
"ebiggers@...gle.com" <ebiggers@...gle.com>,
"herbert@...dor.apana.org.au" <herbert@...dor.apana.org.au>,
Peter Zijlstra <peterz@...radead.org>,
Daniel Borkmann <daniel@...earbox.net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"jpoimboe@...hat.com" <jpoimboe@...hat.com>,
"jannh@...gle.com" <jannh@...gle.com>,
"Perla, Enrico" <enrico.perla@...el.com>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: RE: [PATCH] x86/entry/64: randomize kernel stack offset upon syscall
> I confess I've kind of lost the plot on the performance requirements
> at this point. Instead of measuring and evaluating potential
> solutions, can we try to approach this from the opposite direction and
> ask what the requirements are?
>
> What's the maximum number of CPU cycles that we are allowed to burn
> such that we can meet the 1-2% overhead?
This is a good question to which I have no answer, so I have tried to
experiment with the performance numbers as best I can. I don't know what is
considered acceptable overhead for the syscall path, and I guess the answer
can also be given for two security configurations: weak (fewer CPU cycles)
and strong (more cycles).
>
> And how many bits of uncertainty are we trying to present to the
> attacker?
I think currently we are talking about 5 bits; ideally maybe 8 bits.
Given that the offset is new for each syscall, this should be enough to remove
the "stability and predictability" property of the kernel thread stack that
attackers rely on. If your chances of crashing the kernel are 255 out of 256,
you might look for some other attack path instead.
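To make the numbers concrete, here is an illustrative sketch (not the
actual patch; the names are mine, and `rnd` stands for whatever entropy
source we pick) of how a few random bits could displace the stack. With
5 bits and 8-byte alignment the stack start lands in one of 32 positions,
so a blind guess crashes the kernel 31 times out of 32; with 8 bits it
is 255 out of 256:

	/* Illustrative only, not the proposed patch. */
	unsigned long offset = (rnd & 0x1f) << 3; /* 5 bits, 8-byte aligned */
	char *sp = __builtin_alloca(offset);	  /* shift the stack down */
	asm volatile("" : : "r"(sp) : "memory");  /* don't optimize it out */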
> What's the minimum beyond which we shouldn't bother? (Perhaps
> because rdtsc will give us that many bits?)
Again, it is all about probabilities. 4 bits gives an attacker a success
chance of 1 in 16 (of not crashing the kernel), which is already starting to
be dangerous in my view, so maybe no fewer than 5?
> And does that change if
> we vary the reseed window in terms of the number of system calls
> between reseeding?
I think if we are talking about a prng, reseeding should be done periodically,
both to make sure we never overrun the safe generation period and for
additional security (reseeding with proper entropy).
But the most important part is to have a distinct offset (fresh random bits)
on each consecutive syscall.
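As a rough sketch of what I mean (hypothetical names, not a patch): a
cheap per-CPU generator hands out the offset bits on every syscall and
is reseeded from the CRNG every N calls:

	/* Hypothetical: fast per-syscall bits, periodic CRNG reseed.
	 * Assumes <linux/percpu.h> and <linux/random.h>, and that the
	 * state is seeded non-zero at boot. */
	#define RESEED_INTERVAL	4096

	static DEFINE_PER_CPU(u32, offset_rnd_state);
	static DEFINE_PER_CPU(u32, offset_rnd_count);

	static u32 syscall_offset_bits(void)
	{
		u32 s;

		if (this_cpu_inc_return(offset_rnd_count) % RESEED_INTERVAL == 0)
			this_cpu_write(offset_rnd_state, get_random_u32());

		/* xorshift32 as the cheap per-syscall step (illustrative). */
		s = this_cpu_read(offset_rnd_state);
		s ^= s << 13;
		s ^= s >> 17;
		s ^= s << 5;
		this_cpu_write(offset_rnd_state, s);

		return s & 0x1f;	/* 5 bits of offset */
	}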
> And what are the ideal parameters after which point we're just gilding
> the lily?
Not sure about the ideal parameters for the whole combination here, since
security and performance basically conflict with each other (as usual).
That's why I was proposing to have two versions of this:
- one with a tilt towards performance (rdtsc-based)
- one with a tilt towards security (CRNG-based)
And then let users choose what matters more for their workload (a rough
sketch of the two variants is below).
For normal things like desktops, etc., the CRNG-based version won't add
any noticeable overhead. It would only matter for syscall-sensitive workloads,
which, btw, most likely do not enable quite a bunch of other security
protections anyway, so I would say that for them even the rdtsc() version is
an actual improvement in their stack defenses (and with basically no impact
on performance).
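Roughly, the split between the two variants could look like this (the
config name and the 5-bit mask are made up for illustration):

	#ifdef CONFIG_RANDOMIZE_KSTACK_STRONG	/* hypothetical option */
		offset = get_random_u32() & 0x1f;  /* CRNG: stronger, costlier */
	#else
		offset = rdtsc() & 0x1f;	   /* TSC low bits: cheap, weaker */
	#endif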
On a related note: the current prng we have in the kernel (prandom) is based
on a *very old* style of prng, basically 4 linear LFSRs xored together.
Nowadays we have much more powerful prngs with much better
statistical and even security properties (not cryptographically secure, but
still not as linear as the one above).
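For reference, the core of lib/random32.c currently looks roughly like
this, which is exactly the 4-way LFSR xor I mean:

	#define TAUSWORTHE(s, a, b, c, d) ((s & c) << d) ^ (((s << a) ^ s) >> b)

	static u32 prandom_u32_state(struct rnd_state *state)
	{
		state->s1 = TAUSWORTHE(state->s1,  6U, 13U, 4294967294U, 18U);
		state->s2 = TAUSWORTHE(state->s2,  2U, 27U, 4294967288U,  2U);
		state->s3 = TAUSWORTHE(state->s3, 13U, 21U, 4294967280U,  7U);
		state->s4 = TAUSWORTHE(state->s4,  3U, 12U, 4294967168U, 13U);
		return (state->s1 ^ state->s2 ^ state->s3 ^ state->s4);
	}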
Why do we still use a prng that is a couple of decades behind the
state of the art in this area? It is actively used, especially by the network
stack; should we update it to something more appropriate (the speed would be
comparable)?
I am mostly talking about PCG-based generators:
http://www.pcg-random.org/
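For a flavor of it: the minimal 32-bit generator from that site is just a
64-bit LCG step plus a permuting output function (this is the reference
pcg32 code, lightly reformatted to kernel types):

	typedef struct { u64 state; u64 inc; } pcg32_state;

	static u32 pcg32_random(pcg32_state *rng)
	{
		u64 oldstate = rng->state;
		u32 xorshifted, rot;

		/* LCG state transition. */
		rng->state = oldstate * 6364136223846793005ULL + (rng->inc | 1);

		/* Output permutation: xorshift, then a data-dependent rotate. */
		xorshifted = ((oldstate >> 18u) ^ oldstate) >> 27u;
		rot = oldstate >> 59u;
		return (xorshifted >> rot) | (xorshifted << ((-rot) & 31));
	}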
If people are interested, I could put together a PoC, and we have an expert
here whom we can consult for calculations of min-entropy, HILL entropy and
whatever else is requested.
Best Regards,
Elena.