Message-Id: <20240304184252.work.496-kees@kernel.org>
Date: Mon, 4 Mar 2024 10:49:28 -0800
From: Kees Cook <keescook@...omium.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Kees Cook <keescook@...omium.org>,
"GONG, Ruiqi" <gongruiqi@...weicloud.com>,
Xiu Jianfeng <xiujianfeng@...wei.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Kent Overstreet <kent.overstreet@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
Christian Brauner <brauner@...nel.org>,
Al Viro <viro@...iv.linux.org.uk>,
Jan Kara <jack@...e.cz>,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org,
linux-hardening@...r.kernel.org
Subject: [PATCH 0/4] slab: Introduce dedicated bucket allocator
Hi,
Repeating the commit log for patch 1 here:
Dedicated caches are available for fixed size allocations via
kmem_cache_alloc(), but for dynamically sized allocations there is only
the global kmalloc API's set of buckets available. This means it isn't
possible to separate specific sets of dynamically sized allocations into
a separate collection of caches.
This leads to a use-after-free exploitation weakness in the Linux
kernel since many heap memory spraying/grooming attacks depend on using
userspace-controllable dynamically sized allocations to collide with
fixed size allocations that end up in the same cache.
While CONFIG_RANDOM_KMALLOC_CACHES provides a probabilistic defense
against these kinds of "type confusion" attacks, including for fixed
same-size heap objects, we can create a complementary deterministic
defense for dynamically sized allocations.
In order to isolate user-controllable sized allocations from system
allocations, introduce kmem_buckets_create() and kmem_buckets_alloc(),
which behave like kmem_cache_create() and kmem_cache_alloc() respectively,
confining allocations to a dedicated set of sized caches (which have
the same layout as the kmalloc caches).
This can also be used in the future, once codetag allocation annotations
exist, to implement per-caller allocation cache isolation[0] even for
dynamic allocations.
Link: https://lore.kernel.org/lkml/202402211449.401382D2AF@keescook [0]
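As a rough usage sketch (the argument lists below are assumptions based
on the kmem_cache_create()/kmem_cache_alloc() analogy, not a verbatim
copy of the patch; "foo" is a placeholder subsystem), a caller creates
its bucket set once at init time and then allocates user-sized objects
from it instead of from the global kmalloc caches:

  #include <linux/init.h>
  #include <linux/slab.h>

  /* Illustrative only: the exact kmem_buckets_create() parameters are
   * assumed here; see patch 1 for the real prototypes. */
  static kmem_buckets *foo_buckets __ro_after_init;

  static int __init foo_buckets_init(void)
  {
	foo_buckets = kmem_buckets_create("foo",  /* cache name prefix */
					  0,      /* slab flags */
					  0, 0,   /* useroffset/usersize */
					  NULL);  /* ctor */
	return 0;
  }
  subsys_initcall(foo_buckets_init);

  void *foo_alloc(size_t len, gfp_t gfp)
  {
	/* Behaves like kmalloc(len, gfp), but the object lands in the
	 * dedicated "foo" buckets rather than the shared global ones. */
	return kmem_buckets_alloc(foo_buckets, len, gfp);
  }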
After the implementation there are 3 example patches showing how this
could be used for some repeat "offenders" that get used in exploits.
There are more to be isolated beyond just these. Repeating the commit
log for patch 2 here:
The msg subsystem is a common target for exploiting[1][2][3][4][5][6]
use-after-free type confusion flaws in the kernel for both read and
write primitives. Avoid having a user-controlled size cache share the
global kmalloc allocator by using a separate set of kmalloc buckets.
After a fresh boot under Ubuntu 23.10, we can see the caches are already
in use:
# grep ^msg_msg /proc/slabinfo
msg_msg-8k             0      0   8192    4    8 : ...
msg_msg-4k            96    128   4096    8    8 : ...
msg_msg-2k            64     64   2048   16    8 : ...
msg_msg-1k            64     64   1024   16    4 : ...
msg_msg-16          1024   1024     16  256    1 : ...
msg_msg-8              0      0      8  512    1 : ...
Link: https://blog.hacktivesecurity.com/index.php/2022/06/13/linux-kernel-exploit-development-1day-case-study/ [1]
Link: https://hardenedvault.net/blog/2022-11-13-msg_msg-recon-mitigation-ved/ [2]
Link: https://www.willsroot.io/2021/08/corctf-2021-fire-of-salvation-writeup.html [3]
Link: https://a13xp0p0v.github.io/2021/02/09/CVE-2021-26708.html [4]
Link: https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html [5]
Link: https://zplin.me/papers/ELOISE.pdf [6]
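For a concrete feel of that change (again a sketch with assumed flags
and user-copy bounds rather than the exact diff, and alloc_msg_body()
is just an illustrative stand-in for the allocation inside alloc_msg()),
ipc/msgutil.c sets up its own bucket set at boot, which is what produces
the msg_msg-* caches shown above, and the user-sized allocation switches
over to it:

  #include <linux/init.h>
  #include <linux/slab.h>

  static kmem_buckets *msg_buckets __ro_after_init;

  static int __init init_msg_buckets(void)
  {
	/* The "msg_msg" name is what yields the msg_msg-8 through
	 * msg_msg-8k caches visible in /proc/slabinfo above. */
	msg_buckets = kmem_buckets_create("msg_msg", SLAB_ACCOUNT,
					  0, 0, NULL);
	return 0;
  }
  subsys_initcall(init_msg_buckets);

  static void *alloc_msg_body(size_t alloc_size)
  {
	/* Previously a plain kmalloc(): the userspace-controlled size
	 * now lands in the dedicated msg_msg-* buckets. */
	return kmem_buckets_alloc(msg_buckets, alloc_size, GFP_KERNEL);
  }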
-Kees
Kees Cook (4):
slab: Introduce dedicated bucket allocator
ipc, msg: Use dedicated slab buckets for alloc_msg()
xattr: Use dedicated slab buckets for setxattr()
mm/util: Use dedicated slab buckets for memdup_user()
fs/xattr.c | 12 ++++++++-
include/linux/slab.h | 26 ++++++++++++++++++
ipc/msgutil.c | 11 +++++++-
mm/slab_common.c | 64 ++++++++++++++++++++++++++++++++++++++++++++
mm/util.c | 12 ++++++++-
5 files changed, 122 insertions(+), 3 deletions(-)
--
2.34.1