Message-ID: <20200810080758.940-1-wuyun.wu@huawei.com>
Date: Mon, 10 Aug 2020 16:07:55 +0800
From: <wuyun.wu@...wei.com>
To: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
"David Rientjes" <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
"Andrew Morton" <akpm@...ux-foundation.org>
CC: <liu.xiang6@....com.cn>, Abel Wu <wuyun.wu@...wei.com>,
"open list:SLAB ALLOCATOR" <linux-mm@...ck.org>,
open list <linux-kernel@...r.kernel.org>
Subject: [PATCH] mm/slub: remove useless kmem_cache_debug
From: Abel Wu <wuyun.wu@...wei.com>

The commit below is incomplete, as it did not handle the corresponding
add_full() part in deactivate_slab().

commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before
remove_full()")
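
For reference, with this patch applied the M_FULL branch of
deactivate_slab() would read roughly as below. This is a sketch
reconstructed from the hunks that follow, shown for illustration only,
not an additional change; the point is that the list_lock is only taken
here when SLUB debugging is compiled in:

	} else {
		m = M_FULL;
#ifdef CONFIG_SLUB_DEBUG
		if (!lock) {
			lock = 1;
			/*
			 * This also ensures that the scanning of full
			 * slabs from diagnostic functions will not see
			 * any frozen slabs.
			 */
			spin_lock(&n->list_lock);
		}
#endif
	}
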
Signed-off-by: Abel Wu <wuyun.wu@...wei.com>
---
 mm/slub.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index fe81773..0b021b7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 		}
 	} else {
 		m = M_FULL;
-		if (kmem_cache_debug(s) && !lock) {
+#ifdef CONFIG_SLUB_DEBUG
+		if (!lock) {
 			lock = 1;
 			/*
 			 * This also ensures that the scanning of full
@@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 			 */
 			spin_lock(&n->list_lock);
 		}
+#endif
 	}
 
 	if (l != m) {
--
1.8.3.1