Date:	Mon, 06 Oct 2014 19:53:54 +0400
From:	Andrey Ryabinin <>
Cc:	Andrey Ryabinin <>,
	Dmitry Vyukov <>,
	Konstantin Serebryany <>,
	Dmitry Chernenkov <>,
	Andrey Konovalov <>,
	Yuri Gribov <>,
	Konstantin Khlebnikov <>,
	Sasha Levin <>,
	Michal Marek <>,
	Thomas Gleixner <>,
	Ingo Molnar <>,
	Christoph Lameter <>,
	Pekka Enberg <>,
	David Rientjes <>,
	Joonsoo Kim <>,
	Andrew Morton <>,
	Dave Hansen <>,
	Andi Kleen <>,
	Vegard Nossum <>,
	"H. Peter Anvin" <>,,, Randy Dunlap <>,
	Peter Zijlstra <>,
	Alexander Viro <>,
	Dave Jones <>
Subject: [PATCH v4 00/13] Kernel address sanitizer - runtime memory debugger.

KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.

Currently KASAN is supported only for the x86_64 architecture and requires the
kernel to be built with the SLUB allocator.
KASAN uses compile-time instrumentation to check every memory access, therefore
you will need a fresh GCC >= v5.0.0.

Patches are based on the motm-2014-10-02-16-22 tree and are also available in git:

	git:// --branch=kasan/kasan_v4

Changes since v3:

    - Rebased on top of the motm-2014-10-02-16-22.
    - Added comment explaining why rcu slabs are not poisoned in kasan_slab_free().
    - Removed the 'Do not use slub poisoning with KASan because poisoning
       overwrites user-tracking info' paragraph from Documentation/kasan.txt,
       because it was simply wrong. Poisoning overwrites only the object's data
       and doesn't touch metadata, so it works fine with KASan.

    - Removed useless kasan_free_slab_pages().
    - Fixed kasan_mark_slab_padding(). In v3 kasan_mark_slab_padding() could
        leave some memory unpoisoned.

    - Removed __asan_init_v*() stub. GCC doesn't generate this call anymore:

    - Replaced CALL_KASAN_REPORT define with inline function
        (patch "kasan: introduce inline instrumentation")

Changes since v2:

    - Shadow moved to vmalloc area.
    - Added a poison page. This page is mapped to the shadow corresponding to
      the shadow region itself:
       [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
      It is used to catch memory accesses to the shadow outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
         (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some SLUB-specific stuff moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.

    - Added inline instrumentation in the last patch. This will require two
         not-yet-in-trunk patches for GCC:

Changes since v1:

    - The main change is in the shadow memory layout.
      Now we reserve 1/8 of all virtual addresses available for the kernel for
      shadow memory: 16TB on x86_64 to cover all 128TB of the kernel's address space.
      At an early stage we map the whole shadow region with the zero page.
      Later, after physical memory is mapped to the direct mapping address range,
      we unmap zero pages from the corresponding shadow and allocate and map real
      memory.

     - Since the per-arch work is much bigger now, support for the arm/x86_32 platforms was dropped.

     - CFLAGS were changed from -fsanitize=address with different --params to -fsanitize=kernel-address

     - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

     - Removed kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed: the compiler
       already instruments memset/memcpy/memmove (it inserts __asan_load/__asan_store calls before the mem*() calls).

     - branch profiling disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

     - kasan hooks for the buddy allocator moved to the right places

Comparison with other debugging features:

	- KASan can do almost everything that kmemcheck can. KASan uses compile-time
	  instrumentation, which makes it significantly faster than kmemcheck.
	  The only advantage of kmemcheck over KASan is detection of uninitialized
	  memory reads.

	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.

SLUB_DEBUG (poisoning, redzones):
	- SLUB_DEBUG has lower overhead than KASan.

	- SLUB_DEBUG in most cases is not able to detect bad reads, while
	  KASan is able to detect both bad reads and writes.

	- In some cases (e.g. a redzone being overwritten) SLUB_DEBUG detects
	  bugs only on allocation/freeing of the object. KASan catches a
	  bug right before it happens, so we always know the exact
	  place of the first bad read/write.
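
	As an illustration of that last point, here is the kind of bug meant
	above: an out-of-bounds read that overwrites nothing (so redzone
	checking at alloc/free time can't flag it), but which KASan reports
	at the offending access. This is a minimal sketch in the spirit of
	the tests in lib/test_kasan.c added later in this series; the
	function name is illustrative, not one of the module's actual tests:

		#include <linux/slab.h>

		static noinline void kmalloc_oob_read_demo(void)
		{
			char *ptr;
			size_t size = 123;

			ptr = kmalloc(size, GFP_KERNEL);
			if (!ptr)
				return;

			/* Out-of-bounds read: KASan reports this access,
			 * SLUB_DEBUG has nothing to check at kfree() time. */
			((volatile char *)ptr)[size];

			kfree(ptr);
		}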

Basic idea:

    The main idea of KASAN is to use shadow memory to record whether each byte
    of memory is safe to access or not, and to use the compiler's instrumentation
    to check the shadow memory on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow
    memory (on x86_64, 16TB of virtual address space is reserved for shadow to
    cover all 128TB) and uses a direct mapping with a scale and offset to
    translate a memory address to its corresponding shadow address.

    Here is the function that translates an address to its corresponding shadow
    address:

         unsigned long kasan_mem_to_shadow(unsigned long addr)
         {
                 return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
         }
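
    For example, with KASAN_SHADOW_SCALE_SHIFT == 3 (the 1/8 ratio above),
    addresses that differ only in their low 3 bits translate to the same
    shadow byte. A worked example for an 8-byte-aligned addr:

         /* kasan_mem_to_shadow(addr)     == (addr >> 3) + KASAN_SHADOW_OFFSET
          * kasan_mem_to_shadow(addr + 7) == kasan_mem_to_shadow(addr)
          * kasan_mem_to_shadow(addr + 8) == kasan_mem_to_shadow(addr) + 1
          */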

    So for every 8 bytes there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that all 8 bytes
    of the corresponding memory region are valid for access; k (1 <= k <= 7) means
    that the first k bytes are valid for access, and the other (8 - k) bytes are not;
    any negative value indicates that the entire 8-byte region is inaccessible.
    Different negative values are used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
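
    As a concrete illustration of this encoding, a kmalloc(13) object occupies
    one fully valid 8-byte block plus the first 5 bytes of a second block, so
    its shadow looks roughly like this (the exact negative redzone value is
    one of the kinds from mm/kasan/kasan.h):

         memory:  [ bytes 0..7 valid ][ bytes 8..12 valid, 13..15 bad ][ redzone  ]
         shadow:  [         0        ][               5               ][ negative ]

    An access to byte 12 (offset 4 within its block, and 4 < 5) is fine, while
    an access to byte 13 (offset 5, and 5 >= 5) is reported as a bug.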

    To be able to detect accesses to bad memory we need a special compiler.
    Such a compiler inserts specific function calls (__asan_load*(addr),
    __asan_store*(addr)) before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether the memory region is valid to access by
    checking the corresponding shadow memory. If the access is not valid,
    an error is printed.
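
    For instance, the check for a 1-byte load boils down to something like the
    following simplified sketch (in the spirit of mm/kasan/kasan.c, not the
    exact code; kasan_report() stands in for the actual reporting path, and
    KASAN_SHADOW_MASK is the within-block offset mask, i.e.
    (1 << KASAN_SHADOW_SCALE_SHIFT) - 1):

         void __asan_load1(unsigned long addr)
         {
                 s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);

                 /* Shadow byte 0: the whole 8-byte block is accessible. */
                 if (unlikely(shadow_value)) {
                         /* Offset of the accessed byte within its block. */
                         s8 last_byte = addr & KASAN_SHADOW_MASK;

                         /*
                          * A negative shadow value, or an offset past the
                          * valid prefix of a partially valid block, means
                          * the access is bad.
                          */
                         if (unlikely(last_byte >= shadow_value))
                                 kasan_report(addr, 1, false);
                 }
         }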

Andrey Ryabinin (13):
  Add kernel address sanitizer infrastructure.
  efi: libstub: disable KASAN for efistub
  x86_64: load_percpu_segment: read irq_stack_union.gs_base before
    load_segment
  x86_64: add KASan support
  mm: page_alloc: add kasan hooks on alloc and free paths
  mm: slub: introduce virt_to_obj function.
  mm: slub: share slab_err and object_err functions
  mm: slub: introduce metadata_access_enable()/metadata_access_disable()
  mm: slub: add kernel address sanitizer support for slub allocator
  fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
  kmemleak: disable kasan instrumentation for kmemleak
  lib: add kasan test module
  kasan: introduce inline instrumentation

 Documentation/kasan.txt               | 174 ++++++++++++++
 Makefile                              |  15 +-
 arch/x86/Kconfig                      |   1 +
 arch/x86/boot/Makefile                |   2 +
 arch/x86/boot/compressed/Makefile     |   2 +
 arch/x86/include/asm/kasan.h          |  27 +++
 arch/x86/kernel/Makefile              |   2 +
 arch/x86/kernel/cpu/common.c          |   4 +-
 arch/x86/kernel/dumpstack.c           |   5 +-
 arch/x86/kernel/head64.c              |   9 +-
 arch/x86/kernel/head_64.S             |  28 +++
 arch/x86/mm/Makefile                  |   3 +
 arch/x86/mm/init.c                    |   3 +
 arch/x86/mm/kasan_init_64.c           |  87 +++++++
 arch/x86/realmode/Makefile            |   2 +-
 arch/x86/realmode/rm/Makefile         |   1 +
 arch/x86/vdso/Makefile                |   1 +
 drivers/firmware/efi/libstub/Makefile |   1 +
 fs/dcache.c                           |   5 +
 include/linux/kasan.h                 |  69 ++++++
 include/linux/sched.h                 |   3 +
 include/linux/slab.h                  |  11 +-
 include/linux/slub_def.h              |   9 +
 lib/Kconfig.debug                     |   2 +
 lib/Kconfig.kasan                     |  54 +++++
 lib/Makefile                          |   1 +
 lib/test_kasan.c                      | 254 ++++++++++++++++++++
 mm/Makefile                           |   4 +
 mm/compaction.c                       |   2 +
 mm/kasan/Makefile                     |   3 +
 mm/kasan/kasan.c                      | 430 ++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h                      |  54 +++++
 mm/kasan/report.c                     | 238 +++++++++++++++++++
 mm/kmemleak.c                         |   6 +
 mm/page_alloc.c                       |   3 +
 mm/slab_common.c                      |   5 +-
 mm/slub.c                             |  55 ++++-
 scripts/Makefile.lib                  |  10 +
 38 files changed, 1570 insertions(+), 15 deletions(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 arch/x86/include/asm/kasan.h
 create mode 100644 arch/x86/mm/kasan_init_64.c
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 lib/test_kasan.c
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

Cc: Dmitry Vyukov <>
Cc: Konstantin Serebryany <>
Cc: Dmitry Chernenkov <>
Cc: Andrey Konovalov <>
Cc: Yuri Gribov <>
Cc: Konstantin Khlebnikov <>
Cc: Sasha Levin <>
Cc: Michal Marek <>
Cc: Thomas Gleixner <>
Cc: Ingo Molnar <>
Cc: Christoph Lameter <>
Cc: Pekka Enberg <>
Cc: David Rientjes <>
Cc: Joonsoo Kim <>
Cc: Andrew Morton <>
Cc: Dave Hansen <>
Cc: Andi Kleen <>
Cc: Vegard Nossum <>
Cc: H. Peter Anvin <>
Cc: <>
Cc: <>
Cc: Randy Dunlap <>
Cc: Michal Marek <>
Cc: Ingo Molnar <>
Cc: Peter Zijlstra <>
Cc: Alexander Viro <>
Cc: Dave Jones <>

