Message-Id: <20230629221910.359711-1-julian.pidancet@oracle.com>
Date: Fri, 30 Jun 2023 00:19:10 +0200
From: Julian Pidancet <julian.pidancet@...cle.com>
To: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>
Cc: Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
Jonathan Corbet <corbet@....net>, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, Matthew Wilcox <willy@...radead.org>,
Kees Cook <keescook@...omium.org>,
Rafael Aquini <aquini@...hat.com>,
Julian Pidancet <julian.pidancet@...cle.com>
Subject: [PATCH v2] mm/slub: disable slab merging in the default configuration
Make CONFIG_SLAB_MERGE_DEFAULT default to n unless CONFIG_SLUB_TINY is
enabled. The benefits of slab merging are limited on systems that are
not memory constrained: the memory overhead is low and evidence of its
effect on cache hotness is hard to come by.

On the other hand, keeping allocations in distinct slabs makes attacks
that rely on "heap spraying" harder to carry out successfully.

Side with security in the default kernel configuration over
questionable performance benefits/memory efficiency.
A timed kernel compilation test on x86 with 4K pages, run 10 times with
slab_merge and then 10 times with slab_nomerge on the same hardware in
a similar state, shows no sign of a performance hit either way:
| slab_merge | slab_nomerge |
------+------------------+------------------|
Time | 588.080 ± 0.799 | 587.308 ± 1.411 |
Min | 586.267 | 584.640 |
Max | 589.248 | 590.091 |
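The timing runs above can be sketched roughly as follows. This is a
sketch of the methodology, not the harness actually used: the real runs
built a kernel, so BUILD_CMD here is a placeholder defaulting to a
no-op, and the run count is reduced from 10.

```shell
#!/bin/sh
# Timing-methodology sketch. BUILD_CMD is a stand-in: substitute the
# real workload, e.g. BUILD_CMD='make -s -j"$(nproc)"'.
BUILD_CMD=${BUILD_CMD:-true}
RUNS=${RUNS:-3}                      # the posting used 10 runs
: > times.txt
i=0
while [ "$i" -lt "$RUNS" ]; do
    start=$(date +%s.%N)             # GNU date, nanosecond resolution
    sh -c "$BUILD_CMD"
    end=$(date +%s.%N)
    awk -v a="$start" -v b="$end" 'BEGIN { print b - a }' >> times.txt
    i=$((i + 1))
done
# mean +/- standard deviation over the wall-clock samples
awk '{ s += $1; q += $1 * $1; n++ }
     END { m = s / n; v = q / n - m * m; if (v < 0) v = 0;
           printf "Time %.3f +/- %.3f\n", m, sqrt(v) }' times.txt
```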
Peaks in slab usage during the test workload reveal a memory overhead
of 2.2 MiB when using slab_nomerge. Slab usage overhead after a fresh boot
amounts to 2.3 MiB:
Slab Usage | slab_merge | slab_nomerge |
-------------------+------------+--------------|
After fresh boot | 79908 kB | 82284 kB |
During test (peak) | 127940 kB | 130204 kB |
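For reference, slab usage totals like those in the table can be sampled
from /proc/meminfo; the exact tooling used for the numbers above is not
stated, this is just one generic way to read them.

```shell
# Print current slab memory totals in kB as reported by the kernel.
# Sampling this after boot and at the peak of a workload gives figures
# comparable to the "Slab Usage" table.
awk '/^(Slab|SReclaimable|SUnreclaim):/ { printf "%-14s %8d kB\n", $1, $2 }' \
    /proc/meminfo
```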
Signed-off-by: Julian Pidancet <julian.pidancet@...cle.com>
Reviewed-by: Kees Cook <keescook@...omium.org>
---
v2:
- Re-run benchmark to minimize variance in results due to CPU
frequency scaling.
- Record slab usage after boot and peaks during the test workload.
- Include benchmark results in commit message.
- Fix typo: s/MEGE/MERGE/.
- Specify that "overhead" refers to memory overhead in SLUB doc.
v1:
- Link: https://lore.kernel.org/linux-mm/20230627132131.214475-1-julian.pidancet@oracle.com/
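As a note for testers: whether merging is in effect can be observed in
SLUB's sysfs tree, where merged caches appear as symlink aliases (a
quick check, assuming a SLUB kernel with sysfs mounted; `slabinfo -a`
shows the same mapping).

```shell
# Count SLUB cache aliases: with slab merging active, merged caches
# show up as symlinks under /sys/kernel/slab; with slab_nomerge the
# count drops to (near) zero.
if [ -d /sys/kernel/slab ]; then
    find /sys/kernel/slab -maxdepth 1 -type l | wc -l
else
    echo "no SLUB sysfs tree on this kernel" >&2
fi
```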
.../admin-guide/kernel-parameters.txt | 29 ++++++++++---------
Documentation/mm/slub.rst | 7 +++--
mm/Kconfig | 6 ++--
3 files changed, 22 insertions(+), 20 deletions(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index c5e7bb4babf0..7e78471a96b7 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5652,21 +5652,22 @@
slram= [HW,MTD]
- slab_merge [MM]
- Enable merging of slabs with similar size when the
- kernel is built without CONFIG_SLAB_MERGE_DEFAULT.
-
slab_nomerge [MM]
- Disable merging of slabs with similar size. May be
- necessary if there is some reason to distinguish
- allocs to different slabs, especially in hardened
- environments where the risk of heap overflows and
- layout control by attackers can usually be
- frustrated by disabling merging. This will reduce
- most of the exposure of a heap attack to a single
- cache (risks via metadata attacks are mostly
- unchanged). Debug options disable merging on their
- own.
+ Disable merging of slabs with similar size when
+ the kernel is built with CONFIG_SLAB_MERGE_DEFAULT.
+ Allocations of the same size made in distinct
+ caches will be placed in separate slabs. In
+ hardened environments, the risk of heap overflows
+ and layout control by attackers can usually be
+ frustrated by disabling merging.
+
+ slab_merge [MM]
+ Enable merging of slabs with similar size. May be
+ necessary to reduce overhead or increase cache
+ hotness of objects, at the cost of no longer
+ confining the exposure of a heap attack to a
+ single cache (risks via metadata attacks are
+ mostly unchanged).
For more information see Documentation/mm/slub.rst.
slab_max_order= [MM, SLAB]
diff --git a/Documentation/mm/slub.rst b/Documentation/mm/slub.rst
index be75971532f5..0e2ce82177c0 100644
--- a/Documentation/mm/slub.rst
+++ b/Documentation/mm/slub.rst
@@ -122,9 +122,10 @@ used on the wrong slab.
Slab merging
============
-If no debug options are specified then SLUB may merge similar slabs together
-in order to reduce overhead and increase cache hotness of objects.
-``slabinfo -a`` displays which slabs were merged together.
+If the kernel is built with ``CONFIG_SLAB_MERGE_DEFAULT`` or if ``slab_merge``
+is specified on the kernel command line, then SLUB may merge similar slabs
+together in order to reduce memory overhead and increase cache hotness of
+objects. ``slabinfo -a`` displays which slabs were merged together.
Slab validation
===============
diff --git a/mm/Kconfig b/mm/Kconfig
index 7672a22647b4..05b0304302d4 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -255,7 +255,7 @@ config SLUB_TINY
config SLAB_MERGE_DEFAULT
bool "Allow slab caches to be merged"
- default y
+ default n
depends on SLAB || SLUB
help
For reduced kernel memory fragmentation, slab caches can be
@@ -264,8 +264,8 @@ config SLAB_MERGE_DEFAULT
overwrite objects from merged caches (and more easily control
cache layout), which makes such heap attacks easier to exploit
by attackers. By keeping caches unmerged, these kinds of exploits
- can usually only damage objects in the same cache. To disable
- merging at runtime, "slab_nomerge" can be passed on the kernel
+ can usually only damage objects in the same cache. To enable
+ merging at runtime, "slab_merge" can be passed on the kernel
command line.
config SLAB_FREELIST_RANDOM
--
2.40.1