Message-ID: <2236FBA76BA1254E88B949DDB74E612BA4C0EA9C@IRSMSX102.ger.corp.intel.com>
Date: Wed, 20 Mar 2019 12:10:47 +0000
From: "Reshetova, Elena" <elena.reshetova@...el.com>
To: Josh Poimboeuf <jpoimboe@...hat.com>,
Andy Lutomirski <luto@...nel.org>
CC: Kees Cook <keescook@...omium.org>, Jann Horn <jannh@...gle.com>,
"Perla, Enrico" <enrico.perla@...el.com>,
Ingo Molnar <mingo@...hat.com>,
"Borislav Petkov" <bp@...en8.de>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
"Greg KH" <gregkh@...uxfoundation.org>
Subject: RE: [RFC PATCH] x86/entry/64: randomize kernel stack offset upon
syscall
> On Mon, Mar 18, 2019 at 01:15:44PM -0700, Andy Lutomirski wrote:
> > On Mon, Mar 18, 2019 at 2:41 AM Elena Reshetova
> > <elena.reshetova@...el.com> wrote:
> > >
> > > If CONFIG_RANDOMIZE_KSTACK_OFFSET is selected,
> > > the kernel stack offset is randomized upon each
> > > entry to a system call, after the fixed location
> > > of the pt_regs struct.
> > >
> > > This feature is based on the original idea from
> > > the PaX's RANDKSTACK feature:
> > > https://pax.grsecurity.net/docs/randkstack.txt
> > > All credit for the original idea goes to the PaX team.
> > > However, the design and implementation of
> > > RANDOMIZE_KSTACK_OFFSET differs greatly from the RANDKSTACK
> > > feature (see below).
> > >
> > > Reasoning for the feature:
> > >
> > > This feature aims to make various stack-based
> > > attacks that rely on a deterministic stack
> > > structure considerably harder.
> > > We have seen many such attacks in the past [1],[2],[3]
> > > (to name just a few), and as Linux kernel stack protections
> > > have been constantly improving (vmap-based stack
> > > allocation with guard pages, removal of thread_info,
> > > STACKLEAK), attackers have to find new ways for their
> > > exploits to work.
> > >
> > > It is important to note that we currently cannot show
> > > a concrete attack that would be stopped by this new
> > > feature (given that other existing stack protections
> > > are enabled), so this is an attempt to be proactive
> > > rather than catch up with existing successful exploits.
> > >
> > > The main idea is that since the stack offset is
> > > randomized upon each system call, it is very hard for
> > > an attacker to reliably land in any particular place on
> > > the thread stack when the attack is performed.
> > > Also, since the randomization is performed *after* pt_regs,
> > > the ptrace-based approach of discovering the randomization
> > > offset during a long-running syscall should not be
> > > possible.
> > >
> > > [1] jon.oberheide.org/files/infiltrate12-thestackisback.pdf
> > > [2] jon.oberheide.org/files/stackjacking-infiltrate11.pdf
> > > [3] googleprojectzero.blogspot.com/2016/06/exploiting-
> > > recursion-in-linux-kernel_20.html
>
> Now that thread_info is off the stack, and vmap stack guard pages exist,
> it's not clear to me what the benefit is.
Yes, as it says above, this is an attempt to be proactive rather than reactive.
We cannot show a concrete attack right now that would succeed with the vmap
stack enabled, thread_info removed and the other protections in place.
However, the kernel thread stack remains very deterministic, and that
property has been exploited many times in past attacks.
We don't know where creative attackers will go next or what they will use
to mount the next kernel stack-based attack, but I think it is only a
question of time. I don't believe we can claim that the Linux kernel
thread stack is currently immune to attacks.
So, if we can add a protection that is not invasive, in either code or
performance, and that might make an attacker's life considerably harder,
why not do it?
>
> > > The main issue with this approach is that it slightly breaks the
> > > processing of the last frame in the unwinder, so I have made a simple
> > > fix to the frame pointer unwinder (I guess the others should be fixed
> > > similarly) and to the stack dump functionality to "jump" over the
> > > random hole at the end. My way of solving this is probably far from
> > > ideal, so I would really appreciate feedback on how to improve it.
> >
> > That's probably a question for Josh :)
> >
> > Another way to do the dirty work would be to do:
> >
> > char *ptr = alloca(offset);
> > asm volatile ("" :: "m" (*ptr));
> >
> > in do_syscall_64() and adjust compiler flags as needed to avoid warnings. Hmm.
>
> I like the alloca() idea a lot. If you do the stack adjustment in C,
> then everything should just work, with no custom hacks in entry code or
> the unwinders.
Ok, so maybe this is what I am going to try next then.
Best Regards,
Elena.