Date:   Mon, 22 Jun 2020 22:07:37 +0200
From:   Jann Horn <jannh@...gle.com>
To:     Kees Cook <keescook@...omium.org>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Elena Reshetova <elena.reshetova@...el.com>,
        "the arch/x86 maintainers" <x86@...nel.org>,
        Andy Lutomirski <luto@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        Alexander Potapenko <glider@...gle.com>,
        Alexander Popov <alex.popov@...ux.com>,
        Ard Biesheuvel <ard.biesheuvel@...aro.org>,
        Kernel Hardening <kernel-hardening@...ts.openwall.com>,
        Linux ARM <linux-arm-kernel@...ts.infradead.org>,
        Linux-MM <linux-mm@...ck.org>,
        kernel list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v4 3/5] stack: Optionally randomize kernel stack offset
 each syscall

On Mon, Jun 22, 2020 at 9:31 PM Kees Cook <keescook@...omium.org> wrote:
> This provides the ability for architectures to enable kernel stack base
> address offset randomization. This feature is controlled by the boot
> param "randomize_kstack_offset=on/off", with its default value set by
> CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT.
[...]
> +#define add_random_kstack_offset() do {                                        \
> +       if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT, \
> +                               &randomize_kstack_offset)) {            \
> +               u32 offset = this_cpu_read(kstack_offset);              \
> +               u8 *ptr = __builtin_alloca(offset & 0x3FF);             \
> +               asm volatile("" : "=m"(*ptr));                          \
> +       }                                                               \
> +} while (0)

clang generates better code here if the mask is already stack-aligned;
otherwise it has to round the offset up to the stack alignment before
subtracting it from the stack pointer:

$ cat alloca_align.c
#include <alloca.h>
void callee(void);

void alloca_blah(unsigned long rand) {
  asm volatile(""::"r"(alloca(rand & MASK)));
  callee();
}
$ clang -O3 -c -o alloca_align.o alloca_align.c -DMASK=0x3ff
$ objdump -d alloca_align.o
[...]
   0: 55                    push   %rbp
   1: 48 89 e5              mov    %rsp,%rbp
   4: 81 e7 ff 03 00 00    and    $0x3ff,%edi
   a: 83 c7 0f              add    $0xf,%edi
   d: 83 e7 f0              and    $0xfffffff0,%edi
  10: 48 89 e0              mov    %rsp,%rax
  13: 48 29 f8              sub    %rdi,%rax
  16: 48 89 c4              mov    %rax,%rsp
  19: e8 00 00 00 00        callq  1e <alloca_blah+0x1e>
  1e: 48 89 ec              mov    %rbp,%rsp
  21: 5d                    pop    %rbp
  22: c3                    retq
$ clang -O3 -c -o alloca_align.o alloca_align.c -DMASK=0x3f0
$ objdump -d alloca_align.o
[...]
   0: 55                    push   %rbp
   1: 48 89 e5              mov    %rsp,%rbp
   4: 48 89 e0              mov    %rsp,%rax
   7: 81 e7 f0 03 00 00    and    $0x3f0,%edi
   d: 48 29 f8              sub    %rdi,%rax
  10: 48 89 c4              mov    %rax,%rsp
  13: e8 00 00 00 00        callq  18 <alloca_blah+0x18>
  18: 48 89 ec              mov    %rbp,%rsp
  1b: 5d                    pop    %rbp
  1c: c3                    retq
$

(From a glance at the assembly, gcc seems to always assume that the
length may be misaligned.)
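
(Should be easy to double-check with the same test file, e.g.:

$ gcc -O3 -c -o alloca_align.o alloca_align.c -DMASK=0x3f0
$ objdump -d alloca_align.o

though the exact codegen will of course depend on the gcc version.)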

Maybe this should be something along the lines of
__builtin_alloca(offset & (0x3ff & ARCH_STACK_ALIGN_MASK)) (with
appropriate definitions of the stack alignment mask depending on the
architecture's choice of stack alignment for kernel code).
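
Roughly something like this (ARCH_KSTACK_ALIGN / ARCH_STACK_ALIGN_MASK are
made-up names here, and the 16-byte value is only for illustration; the
real alignment would come from each architecture):

/* illustrative only: stack alignment for kernel code, per-arch in reality */
#define ARCH_KSTACK_ALIGN       16
#define ARCH_STACK_ALIGN_MASK   (~(ARCH_KSTACK_ALIGN - 1))

#define add_random_kstack_offset() do {                                 \
        if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT, \
                                &randomize_kstack_offset)) {            \
                u32 offset = this_cpu_read(kstack_offset);              \
                /* 0x3FF & ~0xF folds to the constant 0x3F0 */          \
                u8 *ptr = __builtin_alloca(offset &                     \
                                (0x3FF & ARCH_STACK_ALIGN_MASK));       \
                asm volatile("" : "=m"(*ptr));                          \
        }                                                               \
} while (0)

With a 16-byte alignment that's exactly the 0x3f0 case above, which clang
compiles to the shorter sequence without the extra add/and rounding.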
