Message-Id: <20231120091214.150502-3-sxwjean@me.com>
Date: Mon, 20 Nov 2023 17:12:12 +0800
From: sxwjean@...com
To: cl@...ux.com, penberg@...nel.org, rientjes@...gle.com,
iamjoonsoo.kim@....com, vbabka@...e.cz, roman.gushchin@...ux.dev,
42.hyeyoo@...il.com
Cc: corbet@....net, linux-mm@...ck.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH 2/4] mm/slab: remove slab_nomerge and slab_merge
From: Xiongwei Song <xiongwei.song@...driver.com>
Since the SLAB allocator has been removed, remove the related boot
parameters as well, and rename the global flag from slab_nomerge to
slub_nomerge.
Signed-off-by: Xiongwei Song <xiongwei.song@...driver.com>
---
Documentation/admin-guide/kernel-parameters.txt | 11 ++---------
mm/Kconfig | 2 +-
mm/slab_common.c | 13 +++++--------
3 files changed, 8 insertions(+), 18 deletions(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index c7709a11f8ce..afca9ff7c9f0 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5870,11 +5870,11 @@
slram= [HW,MTD]
- slab_merge [MM]
+ slub_merge [MM]
Enable merging of slabs with similar size when the
kernel is built without CONFIG_SLAB_MERGE_DEFAULT.
- slab_nomerge [MM]
+ slub_nomerge [MM]
Disable merging of slabs with similar size. May be
necessary if there is some reason to distinguish
allocs to different slabs, especially in hardened
@@ -5915,13 +5915,6 @@
lower than slub_max_order.
For more information see Documentation/mm/slub.rst.
- slub_merge [MM, SLUB]
- Same with slab_merge.
-
- slub_nomerge [MM, SLUB]
- Same with slab_nomerge. This is supported for legacy.
- See slab_nomerge for more information.
-
smart2= [HW]
Format: <io1>[,<io2>[,...,<io8>]]
diff --git a/mm/Kconfig b/mm/Kconfig
index 766aa8f8e553..87c3f2e1d0d3 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -255,7 +255,7 @@ config SLAB_MERGE_DEFAULT
cache layout), which makes such heap attacks easier to exploit
by attackers. By keeping caches unmerged, these kinds of exploits
can usually only damage objects in the same cache. To disable
- merging at runtime, "slab_nomerge" can be passed on the kernel
+ merging at runtime, "slub_nomerge" can be passed on the kernel
command line.
config SLAB_FREELIST_RANDOM
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 238293b1dbe1..d707abd31926 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -58,26 +58,23 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
/*
* Merge control. If this is set then no merging of slab caches will occur.
*/
-static bool slab_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
+static bool slub_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
static int __init setup_slab_nomerge(char *str)
{
- slab_nomerge = true;
+ slub_nomerge = true;
return 1;
}
static int __init setup_slab_merge(char *str)
{
- slab_nomerge = false;
+ slub_nomerge = false;
return 1;
}
__setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
__setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
-__setup("slab_nomerge", setup_slab_nomerge);
-__setup("slab_merge", setup_slab_merge);
-
/*
* Determine the size of a slab object
*/
@@ -138,7 +135,7 @@ static unsigned int calculate_alignment(slab_flags_t flags,
*/
int slab_unmergeable(struct kmem_cache *s)
{
- if (slab_nomerge || (s->flags & SLAB_NEVER_MERGE))
+ if (slub_nomerge || (s->flags & SLAB_NEVER_MERGE))
return 1;
if (s->ctor)
@@ -163,7 +160,7 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
{
struct kmem_cache *s;
- if (slab_nomerge)
+ if (slub_nomerge)
return NULL;
if (ctor)
--
2.34.1