Message-Id: <20240807-b4-slab-kfree_rcu-destroy-v2-4-ea79102f428c@suse.cz>
Date: Wed, 07 Aug 2024 12:31:17 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: "Paul E. McKenney" <paulmck@...nel.org>, 
 Joel Fernandes <joel@...lfernandes.org>, 
 Josh Triplett <josh@...htriplett.org>, Boqun Feng <boqun.feng@...il.com>, 
 Christoph Lameter <cl@...ux.com>, David Rientjes <rientjes@...gle.com>
Cc: Steven Rostedt <rostedt@...dmis.org>, 
 Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, 
 Lai Jiangshan <jiangshanlai@...il.com>, Zqiang <qiang.zhang1211@...il.com>, 
 Julia Lawall <Julia.Lawall@...ia.fr>, Jakub Kicinski <kuba@...nel.org>, 
 "Jason A. Donenfeld" <Jason@...c4.com>, 
 "Uladzislau Rezki (Sony)" <urezki@...il.com>, 
 Andrew Morton <akpm@...ux-foundation.org>, 
 Roman Gushchin <roman.gushchin@...ux.dev>, 
 Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org, 
 linux-kernel@...r.kernel.org, rcu@...r.kernel.org, 
 Alexander Potapenko <glider@...gle.com>, Marco Elver <elver@...gle.com>, 
 Dmitry Vyukov <dvyukov@...gle.com>, kasan-dev@...glegroups.com, 
 Jann Horn <jannh@...gle.com>, Mateusz Guzik <mjguzik@...il.com>, 
 Vlastimil Babka <vbabka@...e.cz>
Subject: [PATCH v2 4/7] mm, slab: reintroduce rcu_barrier() into
 kmem_cache_destroy()

There used to be an rcu_barrier() for SLAB_TYPESAFE_BY_RCU caches in
kmem_cache_destroy() until commit 657dc2f97220 ("slab: remove
synchronous rcu_barrier() call in memcg cache release path") moved it to
an asynchronous work item that finishes the destruction of such caches.

The motivation for that commit was the MEMCG_KMEM integration, which at
the time created and removed clones of the global slab caches together
with their cgroups, so blocking cgroup removal was unwelcome. The
implementation has since changed to per-object memcg tracking using a
single cache, so there should no longer be a need for a fast
non-blocking kmem_cache_destroy(), which is typically done only when a
module is unloaded, etc.

Going back to a synchronous barrier has the following advantages:

- simpler implementation
- it's easier to test the result of kmem_cache_destroy() in a kunit
  test (see the sketch below)
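
To illustrate the kunit point, here is a minimal sketch (not part of
this patch; the cache name, object size and test function are made up):
with the synchronous barrier, kmem_cache_destroy() returns only after
the pending RCU callbacks have run, so the test needs no asynchronous
completion handling:

	static void test_typesafe_destroy(struct kunit *test)
	{
		struct kmem_cache *s;
		void *p;

		/* Hypothetical SLAB_TYPESAFE_BY_RCU cache for the test. */
		s = kmem_cache_create("test_rcu_cache", 64, 0,
				      SLAB_TYPESAFE_BY_RCU, NULL);
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, s);

		p = kmem_cache_alloc(s, GFP_KERNEL);
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p);
		kmem_cache_free(s, p);

		/* Now blocks in rcu_barrier() before releasing the cache. */
		kmem_cache_destroy(s);
	}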

Thus, effectively revert commit 657dc2f97220. It is not a 1:1 revert,
as the code has changed since then. The main difference is that
kmem_cache_release(s) is now always called from kmem_cache_destroy(),
but for SLAB_TYPESAFE_BY_RCU caches an rcu_barrier() is performed
first.
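
From a caller's point of view (a hedged sketch; my_cachep is a
hypothetical module-private cache), the typical unload path now looks
like this:

	static void __exit my_module_exit(void)
	{
		/*
		 * For a SLAB_TYPESAFE_BY_RCU cache this call now blocks
		 * in rcu_barrier() before the cache is released, instead
		 * of deferring the release to a workqueue.
		 */
		kmem_cache_destroy(my_cachep);
	}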

Suggested-by: Mateusz Guzik <mjguzik@...il.com>
Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
---
 mm/slab_common.c | 47 ++++-------------------------------------------
 1 file changed, 4 insertions(+), 43 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index a079b8540334..c40227d5fa07 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -40,11 +40,6 @@ LIST_HEAD(slab_caches);
 DEFINE_MUTEX(slab_mutex);
 struct kmem_cache *kmem_cache;
 
-static LIST_HEAD(slab_caches_to_rcu_destroy);
-static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work);
-static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
-		    slab_caches_to_rcu_destroy_workfn);
-
 /*
  * Set of flags that will prevent slab merging
  */
@@ -499,33 +494,6 @@ static void kmem_cache_release(struct kmem_cache *s)
 		slab_kmem_cache_release(s);
 }
 
-static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
-{
-	LIST_HEAD(to_destroy);
-	struct kmem_cache *s, *s2;
-
-	/*
-	 * On destruction, SLAB_TYPESAFE_BY_RCU kmem_caches are put on the
-	 * @slab_caches_to_rcu_destroy list.  The slab pages are freed
-	 * through RCU and the associated kmem_cache are dereferenced
-	 * while freeing the pages, so the kmem_caches should be freed only
-	 * after the pending RCU operations are finished.  As rcu_barrier()
-	 * is a pretty slow operation, we batch all pending destructions
-	 * asynchronously.
-	 */
-	mutex_lock(&slab_mutex);
-	list_splice_init(&slab_caches_to_rcu_destroy, &to_destroy);
-	mutex_unlock(&slab_mutex);
-
-	if (list_empty(&to_destroy))
-		return;
-
-	rcu_barrier();
-
-	list_for_each_entry_safe(s, s2, &to_destroy, list)
-		kmem_cache_release(s);
-}
-
 void slab_kmem_cache_release(struct kmem_cache *s)
 {
 	__kmem_cache_release(s);
@@ -535,7 +503,6 @@ void slab_kmem_cache_release(struct kmem_cache *s)
 
 void kmem_cache_destroy(struct kmem_cache *s)
 {
-	bool rcu_set;
 	int err;
 
 	if (unlikely(!s) || !kasan_check_byte(s))
@@ -551,8 +518,6 @@ void kmem_cache_destroy(struct kmem_cache *s)
 		return;
 	}
 
-	rcu_set = s->flags & SLAB_TYPESAFE_BY_RCU;
-
 	/* free asan quarantined objects */
 	kasan_cache_shutdown(s);
 
@@ -572,14 +537,10 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	if (err)
 		return;
 
-	if (rcu_set) {
-		mutex_lock(&slab_mutex);
-		list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
-		schedule_work(&slab_caches_to_rcu_destroy_work);
-		mutex_unlock(&slab_mutex);
-	} else {
-		kmem_cache_release(s);
-	}
+	if (s->flags & SLAB_TYPESAFE_BY_RCU)
+		rcu_barrier();
+
+	kmem_cache_release(s);
 }
 EXPORT_SYMBOL(kmem_cache_destroy);
 

-- 
2.46.0

