Message-ID: <20230915105933.495735-1-matteorizzo@google.com>
Date: Fri, 15 Sep 2023 10:59:19 +0000
From: Matteo Rizzo <matteorizzo@...gle.com>
To: cl@...ux.com, penberg@...nel.org, rientjes@...gle.com,
iamjoonsoo.kim@....com, akpm@...ux-foundation.org, vbabka@...e.cz,
roman.gushchin@...ux.dev, 42.hyeyoo@...il.com,
keescook@...omium.org, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org, linux-mm@...ck.org,
linux-hardening@...r.kernel.org, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, corbet@....net, luto@...nel.org,
peterz@...radead.org
Cc: jannh@...gle.com, matteorizzo@...gle.com, evn@...gle.com,
poprdi@...gle.com, jordyzomer@...gle.com
Subject: [RFC PATCH 00/14] Prevent cross-cache attacks in the SLUB allocator
The goal of this patch series is to deterministically prevent cross-cache
attacks in the SLUB allocator.
Use-after-free bugs are normally exploited by making the memory allocator
reuse the victim object's memory for an object of a different type. This
creates a type confusion, which is a very powerful attack primitive.
There are generally two ways to create such a type confusion in the kernel.
One is to make SLUB reuse the freed object's address for another object of a
different type that lives in the same slab cache. This only works in slab
caches that can contain objects of different types (i.e. the kmalloc caches),
and the attacker is limited to objects that belong to the same size class as
the victim object.
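As a minimal userspace sketch of this same-cache case (not kernel code: it
uses plain malloc/free instead of SLUB, the struct names are invented, and
whether the allocator actually hands the freed chunk back to the second
allocation is implementation-dependent):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Two unrelated types that happen to fall into the same size class. */
struct victim { void (*callback)(void); char pad[56]; };
struct spray  { char data[64]; };

static void safe_fn(void) { puts("legitimate callback"); }

int main(void)
{
	struct victim *v = malloc(sizeof(*v));
	v->callback = safe_fn;

	free(v);                /* v is now a dangling pointer */

	/* An allocation in the same size class may get the freed chunk back. */
	struct spray *s = malloc(sizeof(*s));
	memset(s->data, 0x41, sizeof(s->data));   /* attacker-controlled bytes */

	if ((void *)s == (void *)v)
		printf("reused: v->callback now reads %p\n", (void *)v->callback);

	/* A real exploit would now invoke v->callback() and jump to 0x4141... */
	free(s);
	return 0;
}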
The other way is a "cross-cache attack": make SLUB return the page containing
the victim object to the page allocator and then make it reuse the same page
for a different slab cache or for other objects that contain
attacker-controlled data. This gives the attacker access to all objects rather
than just the ones in the same size class as the target, and lets them target
objects allocated from dedicated caches such as struct file.
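The cross-cache variant can be pictured with a toy model like the one below
(again not SLUB code; the page pool, the cache layouts and all names are
invented for illustration). The point is that once a whole slab page goes back
to a shared page allocator, a dangling pointer of one type can end up aliasing
an unrelated, attacker-filled object from a completely different cache:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TOY_PAGE_SIZE 4096

/* Toy page allocator: a single page that gets recycled. */
static void *page_pool;

static void *page_alloc(void)
{
	void *p = page_pool ? page_pool : malloc(TOY_PAGE_SIZE);
	page_pool = NULL;
	return p;
}

static void page_free(void *p)
{
	page_pool = p;
}

/* Two unrelated "caches" that both carve their objects out of pages. */
struct file_like    { long f_mode; void *f_op; };
struct attacker_buf { char bytes[TOY_PAGE_SIZE]; };

int main(void)
{
	/* Cache A places a file_like object at the start of a fresh page. */
	struct file_like *f = page_alloc();
	f->f_mode = 0;
	f->f_op = NULL;

	/* The slab becomes empty and its page is returned to the page
	 * allocator, but the stale pointer f is kept around. */
	page_free(f);

	/* Cache B now gets the very same page and fills it with
	 * attacker-controlled data. */
	struct attacker_buf *buf = page_alloc();
	memset(buf->bytes, 0x41, sizeof(buf->bytes));

	/* The dangling file_like pointer now reads the attacker's bytes. */
	printf("f->f_op = %p\n", f->f_op);
	return 0;
}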
This patch series prevents cross-cache attacks by making sure that once a
virtual address is used for a slab cache, it is never reused for anything
except other slabs in that cache.
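A toy model of that invariant (not the actual implementation; the slot table
and helper names below are made up) might look as follows: every slab's
virtual range is permanently tagged with the cache it was first used for, so
even after the slab is freed, that virtual address can only ever be handed out
again on behalf of the same cache:

#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NR_SLOTS 16

/* Toy model: each virtual "slot" remembers the cache that first used it.
 * The ownership is recorded once and never cleared. */
static const char *slot_owner[NR_SLOTS];
static int slot_in_use[NR_SLOTS];

/* Hand out a virtual slot for a cache, reusing only slots that are brand
 * new or were previously owned by the same cache. */
static int slab_slot_alloc(const char *cache)
{
	for (int i = 0; i < NR_SLOTS; i++) {
		if (slot_in_use[i])
			continue;
		if (!slot_owner[i] || strcmp(slot_owner[i], cache) == 0) {
			slot_owner[i] = cache;
			slot_in_use[i] = 1;
			return i;
		}
	}
	return -1;
}

/* Release the slot but keep the cache binding. */
static void slab_slot_free(int i)
{
	slot_in_use[i] = 0;
}

int main(void)
{
	int a = slab_slot_alloc("filp");
	slab_slot_free(a);                        /* slab freed, binding kept */

	int b = slab_slot_alloc("kmalloc-64");    /* can never land on slot a */
	assert(b != a);

	int c = slab_slot_alloc("filp");          /* filp itself may reuse it */
	assert(c == a);

	printf("filp -> %d, kmalloc-64 -> %d, filp again -> %d\n", a, b, c);
	return 0;
}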
Jann Horn (13):
mm/slub: add is_slab_addr/is_slab_page helpers
mm/slub: move kmem_cache_order_objects to slab.h
mm: use virt_to_slab instead of folio_slab
mm/slub: create folio_set/clear_slab helpers
mm/slub: pass additional args to alloc_slab_page
mm/slub: pass slab pointer to the freeptr decode helper
security: introduce CONFIG_SLAB_VIRTUAL
mm/slub: add the slab freelists to kmem_cache
x86: Create virtual memory region for SLUB
mm/slub: allocate slabs from virtual memory
mm/slub: introduce the deallocated_pages sysfs attribute
mm/slub: sanity-check freepointers
security: add documentation for SLAB_VIRTUAL
Matteo Rizzo (1):
mm/slub: don't try to dereference invalid freepointers
Documentation/arch/x86/x86_64/mm.rst | 4 +-
Documentation/security/self-protection.rst | 102 ++++
arch/x86/include/asm/page_64.h | 10 +
arch/x86/include/asm/pgtable_64_types.h | 21 +
arch/x86/mm/init_64.c | 19 +-
arch/x86/mm/kaslr.c | 9 +
arch/x86/mm/mm_internal.h | 4 +
arch/x86/mm/physaddr.c | 10 +
include/linux/slab.h | 8 +
include/linux/slub_def.h | 25 +-
init/main.c | 1 +
kernel/resource.c | 2 +-
lib/slub_kunit.c | 4 +
mm/memcontrol.c | 2 +-
mm/slab.h | 145 +++++
mm/slab_common.c | 21 +-
mm/slub.c | 641 +++++++++++++++++++--
mm/usercopy.c | 12 +-
security/Kconfig.hardening | 16 +
19 files changed, 977 insertions(+), 79 deletions(-)
base-commit: 46a9ea6681907a3be6b6b0d43776dccc62cad6cf
--
2.42.0.459.ge4e396fd5e-goog