Message-Id: <20240807-b4-slab-kfree_rcu-destroy-v2-3-ea79102f428c@suse.cz>
Date: Wed, 07 Aug 2024 12:31:16 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: "Paul E. McKenney" <paulmck@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>,
Josh Triplett <josh@...htriplett.org>, Boqun Feng <boqun.feng@...il.com>,
Christoph Lameter <cl@...ux.com>, David Rientjes <rientjes@...gle.com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>, Zqiang <qiang.zhang1211@...il.com>,
Julia Lawall <Julia.Lawall@...ia.fr>, Jakub Kicinski <kuba@...nel.org>,
"Jason A. Donenfeld" <Jason@...c4.com>,
"Uladzislau Rezki (Sony)" <urezki@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, rcu@...r.kernel.org,
Alexander Potapenko <glider@...gle.com>, Marco Elver <elver@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>, kasan-dev@...glegroups.com,
Jann Horn <jannh@...gle.com>, Mateusz Guzik <mjguzik@...il.com>,
Vlastimil Babka <vbabka@...e.cz>
Subject: [PATCH v2 3/7] mm, slab: move kfence_shutdown_cache() outside
slab_mutex
kfence_shutdown_cache() is called under slab_mutex when the cache is
destroyed synchronously, and outside slab_mutex during the delayed
destruction of SLAB_TYPESAFE_BY_RCU caches.
It seems it should always be safe to call it outside of slab_mutex, so we
can simply move the call into kmem_cache_release(), which is called outside
slab_mutex in both cases.
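For reference, kmem_cache_release() after this change should look roughly
as follows; this is only a sketch, and the else branch (the existing
slab_kmem_cache_release() call) is taken from the surrounding
mm/slab_common.c code rather than from the hunk below:

  static void kmem_cache_release(struct kmem_cache *s)
  {
  	/*
  	 * Now always runs outside slab_mutex, for both the synchronous
  	 * and the delayed SLAB_TYPESAFE_BY_RCU destruction paths.
  	 */
  	kfence_shutdown_cache(s);

  	if (__is_defined(SLAB_SUPPORTS_SYSFS) && slab_state >= FULL)
  		sysfs_slab_release(s);
  	else
  		slab_kmem_cache_release(s);
  }
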
Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
---
mm/slab_common.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index db61df3b4282..a079b8540334 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -492,6 +492,7 @@ EXPORT_SYMBOL(kmem_buckets_create);
  */
 static void kmem_cache_release(struct kmem_cache *s)
 {
+	kfence_shutdown_cache(s);
 	if (__is_defined(SLAB_SUPPORTS_SYSFS) && slab_state >= FULL)
 		sysfs_slab_release(s);
 	else
@@ -521,10 +522,8 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
 
 	rcu_barrier();
 
-	list_for_each_entry_safe(s, s2, &to_destroy, list) {
-		kfence_shutdown_cache(s);
+	list_for_each_entry_safe(s, s2, &to_destroy, list)
 		kmem_cache_release(s);
-	}
 }
 
 void slab_kmem_cache_release(struct kmem_cache *s)
@@ -563,9 +562,6 @@ void kmem_cache_destroy(struct kmem_cache *s)
 
 	list_del(&s->list);
 
-	if (!err && !rcu_set)
-		kfence_shutdown_cache(s);
-
 	mutex_unlock(&slab_mutex);
 	cpus_read_unlock();
--
2.46.0