Message-ID: <202003301122.354B722@keescook>
Date: Mon, 30 Mar 2020 11:27:19 -0700
From: Kees Cook <keescook@...omium.org>
To: Mark Rutland <mark.rutland@....com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Elena Reshetova <elena.reshetova@...el.com>, x86@...nel.org,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Alexander Potapenko <glider@...gle.com>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Jann Horn <jannh@...gle.com>,
"Perla, Enrico" <enrico.perla@...el.com>,
kernel-hardening@...ts.openwall.com,
linux-arm-kernel@...ts.infradead.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/5] stack: Optionally randomize kernel stack offset
each syscall
On Mon, Mar 30, 2020 at 12:25:36PM +0100, Mark Rutland wrote:
> On Tue, Mar 24, 2020 at 01:32:29PM -0700, Kees Cook wrote:
> > +/*
> > + * Do not use this anywhere else in the kernel. This is used here because
> > + * it provides an arch-agnostic way to grow the stack with correct
> > + * alignment. Also, since this use is being explicitly masked to a max of
> > + * 10 bits, stack-clash style attacks are unlikely. For more details see
> > + * "VLAs" in Documentation/process/deprecated.rst
> > + */
> > +void *__builtin_alloca(size_t size);
> > +
> > +#define add_random_kstack_offset() do {				\
> > +	if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,	\
> > +				&randomize_kstack_offset)) {		\
> > +		u32 offset = this_cpu_read(kstack_offset);		\
> > +		char *ptr = __builtin_alloca(offset & 0x3FF);		\
> > +		asm volatile("" : "=m"(*ptr));				\
>
> Is this asm() a homebrew OPTIMIZER_HIDE_VAR(*ptr)? If the asm
> constraints generate better code, could we add those as alternative
> constraints in OPTIMIZER_HIDE_VAR()?
Er, no, sorry, not the same. I disassembled the wrong binary. :)
With asm volatile("" : "=m"(*ptr))
ffffffff810038bc: 48 8d 44 24 0f lea 0xf(%rsp),%rax
ffffffff810038c1: 48 83 e0 f0 and $0xfffffffffffffff0,%rax
With __asm__ ("" : "=r" (var) : "0" (var))
ffffffff810038bc: 48 8d 54 24 0f lea 0xf(%rsp),%rdx
ffffffff810038c1: 48 83 e2 f0 and $0xfffffffffffffff0,%rdx
ffffffff810038c5: 0f b6 02 movzbl (%rdx),%eax
ffffffff810038c8: 88 02 mov %al,(%rdx)
It looks like OPTIMIZER_HIDE_VAR() is basically just:
var = var;
In the former case, we avoid the write and retain the allocation. So I
don't think OPTIMIZER_HIDE_VAR() should be used here, nor should
OPTIMIZER_HIDE_VAR() be changed to remove the "0" (var) bit.
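
For anyone who wants to poke at the difference outside the kernel, below is
a minimal userspace sketch (illustration only, not from the patch): the
alloca()+mask pattern mirrors the macro above, but kstack_offset,
syscall_like_entry(), and do_work() are made-up names, and the static
branch / per-cpu plumbing of the real macro is left out. Building with -O2
and swapping which barrier is active shows the same load/store difference
as the two disassemblies above.

/*
 * Userspace sketch comparing the two asm barriers. Hypothetical names;
 * not kernel code.
 */
#include <alloca.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t kstack_offset = 0x2a5;	/* stand-in for this_cpu_read() */

static void __attribute__((noinline)) do_work(void)
{
	char buf[64];

	/* A local's address shifts with the alloca() size in the caller. */
	printf("local at %p\n", (void *)buf);
}

static void syscall_like_entry(void)
{
	char *ptr = alloca(kstack_offset & 0x3FF);

	/*
	 * Variant from the patch: the "=m" output keeps the alloca()
	 * allocation alive without emitting a load or store of *ptr
	 * (just the lea/and from the first disassembly).
	 */
	asm volatile("" : "=m"(*ptr));

	/*
	 * OPTIMIZER_HIDE_VAR()-style alternative: the "=r"/"0" pair
	 * forces *ptr through a register, adding the movzbl/mov pair
	 * seen in the second disassembly.
	 *
	 * asm volatile("" : "=r"(*ptr) : "0"(*ptr));
	 */

	do_work();
}

int main(void)
{
	syscall_like_entry();
	kstack_offset = 0x123;
	syscall_like_entry();
	return 0;
}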
--
Kees Cook