[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <6248429.A4IJrfgOW3@wuerfel>
Date: Tue, 21 Jun 2016 21:47:44 +0200
From: Arnd Bergmann <arnd@...db.de>
To: Kees Cook <keescook@...omium.org>
Cc: Andy Lutomirski <luto@...nel.org>,
"x86@...nel.org" <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-arch <linux-arch@...r.kernel.org>,
Borislav Petkov <bp@...en8.de>,
Nadav Amit <nadav.amit@...il.com>,
Brian Gerst <brgerst@...il.com>,
"kernel-hardening@...ts.openwall.com"
<kernel-hardening@...ts.openwall.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Jann Horn <jann@...jh.net>,
Heiko Carstens <heiko.carstens@...ibm.com>
Subject: Re: [PATCH v3 00/13] Virtually mapped stacks with guard pages (x86, core)
On Tuesday, June 21, 2016 10:16:21 AM CEST Kees Cook wrote:
> On Tue, Jun 21, 2016 at 2:24 AM, Arnd Bergmann <arnd@...db.de> wrote:
> > On Monday, June 20, 2016 4:43:30 PM CEST Andy Lutomirski wrote:
> >>
> >> On my laptop, this adds about 1.5µs of overhead to task creation,
> >> which seems to be mainly caused by vmalloc inefficiently allocating
> >> individual pages even when a higher-order page is available on the
> >> freelist.
> >
> > Would it help to have a fixed virtual address for the stack instead
> > and map the current stack to that during a task switch, similar to
> > how we handle fixmap pages?
> >
> > That would of course trade the allocation overhead for a task switch
> > overhead, which may be better or worse. It would also give "current"
> > a constant address, which may give a small performance advantage
> > but may also introduce a new attack vector unless we randomize it
> > again.
>
> Right: we don't want a fixed address. That makes attacks WAY easier.
Do we care about making the address more random then? When I look
at /proc/vmallocinfo, I see that allocations are all using
consecutive addresses, so if you can figure out the virtual
address of the stack for one process that would give you a good
chance of guessing the address for the next pid.
Arnd
Powered by blists - more mailing lists