Message-Id: <20220410162511.656541-1-ohkwon1043@gmail.com>
Date: Mon, 11 Apr 2022 01:25:11 +0900
From: Ohhoon Kwon <ohkwon1043@...il.com>
To: Hyeonggon Yoo <42.hyeyoo@...il.com>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Ohhoon Kwon <ohkwon1043@...il.com>,
JaeSang Yoo <jsyoo5b@...il.com>,
Wonhyuk Yang <vvghjk1234@...il.com>,
Jiyoup Kim <lakroforce@...il.com>,
Donghyeok Kim <dthex5d@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [PATCH v2] mm/slab_common: move dma-kmalloc caches creation into new_kmalloc_cache()
There are four types of kmalloc_caches: KMALLOC_NORMAL, KMALLOC_CGROUP,
KMALLOC_RECLAIM, and KMALLOC_DMA. While the first three types are
created using new_kmalloc_cache(), KMALLOC_DMA caches are created by
separate logic. Create KMALLOC_DMA caches through new_kmalloc_cache()
as well, to improve readability.
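
For reference, the cache types come from enum kmalloc_cache_type in
include/linux/slab.h. A simplified sketch of its layout (reproduced
from memory, around v5.17; the exact #ifdef fallbacks may differ) looks
like this:

enum kmalloc_cache_type {
	KMALLOC_NORMAL = 0,
#ifndef CONFIG_ZONE_DMA
	KMALLOC_DMA = KMALLOC_NORMAL,		/* alias when there is no DMA zone */
#endif
#ifndef CONFIG_MEMCG_KMEM
	KMALLOC_CGROUP = KMALLOC_NORMAL,	/* alias when kmem accounting is off */
#else
	KMALLOC_CGROUP,
#endif
	KMALLOC_RECLAIM,
#ifdef CONFIG_ZONE_DMA
	KMALLOC_DMA,
#endif
	NR_KMALLOC_TYPES
};

Since KMALLOC_DMA sits last before NR_KMALLOC_TYPES, widening the loop
bound in create_kmalloc_caches() from KMALLOC_RECLAIM to
NR_KMALLOC_TYPES (see the diff below) is what pulls the DMA caches into
the common creation path.
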
Historically there were only KMALLOC_NORMAL and KMALLOC_DMA caches, and
they were initialized by two separate pieces of logic. However, when
KMALLOC_RECLAIM was introduced in v4.20 via commit 1291523f2c1d ("mm,
slab/slub: introduce kmalloc-reclaimable caches") and KMALLOC_CGROUP
was introduced in v5.14 via commit 494c1dfe855e ("mm: memcg/slab:
create a new set of kmalloc-cg-<n> caches"), their creation was merged
into KMALLOC_NORMAL's path only. Merge the KMALLOC_DMA creation logic
with them, too.

Merging the KMALLOC_DMA initialization with the other types results in
the following two changes:

1. The order in which the dma-kmalloc-<n> caches are added to the
slab_caches list may change: they are now created in the same order as
the other types, i.e. sorted by size, so the order in which they appear
in /proc/slabinfo may change as well.

2. slab_state is set to UP only after the KMALLOC_DMA caches are
created. With SLUB, freelist randomization at cache creation depends on
slab_state >= UP, so the KMALLOC_DMA caches' freelists are not
randomized at creation time; instead, randomization is deferred to
init_freelist_randomization() (see the sketch below).
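
For context, the SLUB-side gating works roughly as sketched below
(paraphrased from mm/slub.c, kmem_cache_open() and
init_freelist_randomization(); not part of this patch, details may
differ slightly):

	/*
	 * In kmem_cache_open(): the freelist is only randomized at
	 * creation time once boot has progressed far enough.
	 */
#ifdef CONFIG_SLAB_FREELIST_RANDOM
	if (slab_state >= UP) {
		if (init_cache_random_seq(s))
			goto error;
	}
#endif

/* Caches created while slab_state < UP are handled later in boot: */
static void __init init_freelist_randomization(void)
{
	struct kmem_cache *s;

	mutex_lock(&slab_mutex);

	/* walk every cache, including the early kmalloc ones */
	list_for_each_entry(s, &slab_caches, list)
		init_cache_random_seq(s);

	mutex_unlock(&slab_mutex);
}

As far as I can tell, init_freelist_randomization() runs from
kmem_cache_init() right after create_kmalloc_caches(), so the
dma-kmalloc caches still get randomized freelists before general use.
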
Co-developed-by: JaeSang Yoo <jsyoo5b@...il.com>
Signed-off-by: JaeSang Yoo <jsyoo5b@...il.com>
Signed-off-by: Ohhoon Kwon <ohkwon1043@...il.com>
---
mm/slab_common.c | 18 +++---------------
1 file changed, 3 insertions(+), 15 deletions(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 6ee64d6208b3..a959d247c27b 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -849,6 +849,8 @@ new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags)
 			return;
 		}
 		flags |= SLAB_ACCOUNT;
+	} else if (IS_ENABLED(CONFIG_ZONE_DMA) && (type == KMALLOC_DMA)) {
+		flags |= SLAB_CACHE_DMA;
 	}
 
 	kmalloc_caches[type][idx] = create_kmalloc_cache(
@@ -877,7 +879,7 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 	/*
 	 * Including KMALLOC_CGROUP if CONFIG_MEMCG_KMEM defined
 	 */
-	for (type = KMALLOC_NORMAL; type <= KMALLOC_RECLAIM; type++) {
+	for (type = KMALLOC_NORMAL; type < NR_KMALLOC_TYPES; type++) {
 		for (i = KMALLOC_SHIFT_LOW; i <= KMALLOC_SHIFT_HIGH; i++) {
 			if (!kmalloc_caches[type][i])
 				new_kmalloc_cache(i, type, flags);
@@ -898,20 +900,6 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 
 	/* Kmalloc array is now usable */
 	slab_state = UP;
-
-#ifdef CONFIG_ZONE_DMA
-	for (i = 0; i <= KMALLOC_SHIFT_HIGH; i++) {
-		struct kmem_cache *s = kmalloc_caches[KMALLOC_NORMAL][i];
-
-		if (s) {
-			kmalloc_caches[KMALLOC_DMA][i] = create_kmalloc_cache(
-				kmalloc_info[i].name[KMALLOC_DMA],
-				kmalloc_info[i].size,
-				SLAB_CACHE_DMA | flags, 0,
-				kmalloc_info[i].size);
-		}
-	}
-#endif
 }
 #endif /* !CONFIG_SLOB */
 
--
2.25.1