Message-ID: <ef97428b-f6e7-481e-b47e-375cc76653ad@suse.cz>
Date: Tue, 25 Feb 2025 10:57:38 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Uladzislau Rezki <urezki@...il.com>, Keith Busch <kbusch@...nel.org>
Cc: "Paul E. McKenney" <paulmck@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>,
Josh Triplett <josh@...htriplett.org>, Boqun Feng <boqun.feng@...il.com>,
Christoph Lameter <cl@...ux.com>, David Rientjes <rientjes@...gle.com>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>, Zqiang <qiang.zhang1211@...il.com>,
Julia Lawall <Julia.Lawall@...ia.fr>, Jakub Kicinski <kuba@...nel.org>,
"Jason A. Donenfeld" <Jason@...c4.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, rcu@...r.kernel.org,
Alexander Potapenko <glider@...gle.com>, Marco Elver <elver@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>, kasan-dev@...glegroups.com,
Jann Horn <jannh@...gle.com>, Mateusz Guzik <mjguzik@...il.com>,
linux-nvme@...ts.infradead.org, leitao@...ian.org
Subject: Re: [PATCH v2 6/7] mm, slab: call kvfree_rcu_barrier() from
kmem_cache_destroy()
On 2/24/25 12:44, Uladzislau Rezki wrote:
> On Fri, Feb 21, 2025 at 06:28:49PM +0100, Vlastimil Babka wrote:
>> On 2/21/25 17:30, Keith Busch wrote:
>> > On Wed, Aug 07, 2024 at 12:31:19PM +0200, Vlastimil Babka wrote:
>> >> We would like to replace call_rcu() users with kfree_rcu() where the
>> >> existing callback is just a kmem_cache_free(). However this causes
>> >> issues when the cache can be destroyed (such as due to module unload).
>> >>
>> >> Currently such modules should be issuing rcu_barrier() before
>> >> kmem_cache_destroy() to have their call_rcu() callbacks processed first.
>> >> This barrier is, however, not sufficient for kfree_rcu() calls in
>> >> flight, due to the batching introduced by a35d16905efc ("rcu: Add basic
>> >> support for kfree_rcu() batching").
>> >>
>> >> This is not a problem for kmalloc caches, which are never destroyed, but
>> >> since the removal of SLOB, kfree_rcu() is also allowed for any other
>> >> cache, which might be destroyed.
>> >>
>> >> In order not to complicate the API, put the responsibility for handling
>> >> outstanding kfree_rcu() in kmem_cache_destroy() itself. Use the newly
>> >> introduced kvfree_rcu_barrier() to wait before destroying the cache.
>> >> This is similar to how we issue rcu_barrier() for SLAB_TYPESAFE_BY_RCU
>> >> caches, but has to be done earlier, as the latter only needs to wait for
>> >> the empty slab pages to finish freeing, and not objects from the slab.
>> >>
>> >> Users of call_rcu() with arbitrary callbacks should still issue
>> >> rcu_barrier() before destroying the cache and unloading the module, as
>> >> kvfree_rcu_barrier() is not a superset of rcu_barrier() and the
>> >> callbacks may be invoking module code or performing other actions that
>> >> are necessary for a successful unload.
>> >>
>> >> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
>> >> ---
>> >> mm/slab_common.c | 3 +++
>> >> 1 file changed, 3 insertions(+)
>> >>
>> >> diff --git a/mm/slab_common.c b/mm/slab_common.c
>> >> index c40227d5fa07..1a2873293f5d 100644
>> >> --- a/mm/slab_common.c
>> >> +++ b/mm/slab_common.c
>> >> @@ -508,6 +508,9 @@ void kmem_cache_destroy(struct kmem_cache *s)
>> >>  	if (unlikely(!s) || !kasan_check_byte(s))
>> >>  		return;
>> >>  
>> >> +	/* in-flight kfree_rcu()'s may include objects from our cache */
>> >> +	kvfree_rcu_barrier();
>> >> +
>> >>  	cpus_read_lock();
>> >>  	mutex_lock(&slab_mutex);
>> >
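To make the commit message concrete, here is a minimal sketch of the module
pattern it describes; "foo" and its cache are hypothetical names, not taken
from any real user:

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>

struct foo {
	struct rcu_head rcu;
	int payload;
};

static struct kmem_cache *foo_cache;

static int __init foo_init(void)
{
	foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0, 0, NULL);
	return foo_cache ? 0 : -ENOMEM;
}
module_init(foo_init);

static void foo_release(struct foo *f)
{
	/*
	 * Formerly call_rcu(&f->rcu, cb), where cb just did
	 * kmem_cache_free(foo_cache, f); kfree_rcu() frees the object
	 * back to foo_cache after a grace period.
	 */
	kfree_rcu(f, rcu);
}

static void __exit foo_exit(void)
{
	/*
	 * With this patch, no barrier is needed here for the
	 * kfree_rcu()'d objects: kmem_cache_destroy() now waits for
	 * them via kvfree_rcu_barrier(). A module that still uses
	 * call_rcu() with its own callback must keep an rcu_barrier()
	 * before this, since that callback may run module code.
	 */
	kmem_cache_destroy(foo_cache);
}
module_exit(foo_exit);
MODULE_LICENSE("GPL");
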
>> > This patch appears to be triggering a new warning in certain conditions
>> > when tearing down an nvme namespace's block device. Stack trace is at
>> > the end.
>> >
>> > The warning indicates that this shouldn't be called from a
>> > WQ_MEM_RECLAIM workqueue. This workqueue is responsible for bringing up
>> > and tearing down block devices, so this is a memory reclaim use AIUI.
>> > I'm a bit confused why we can't tear down a disk from within a memory
>> > reclaim workqueue. Is the recommended solution to simply remove the WQ
>> > flag when creating the workqueue?
>>
>> I think it's reasonable to expect that a memory-reclaim-related action
>> would destroy a kmem cache. Mateusz's suggestion would work around the
>> issue, but then we could get another surprising warning elsewhere. Also,
>> making kmem_cache destruction asynchronous can be tricky when a recreation
>> happens immediately under the same name (implications for sysfs/debugfs
>> etc.). We managed to make the destruction synchronous as part of this
>> series and it would be great to keep it that way.
>>
>> > ------------[ cut here ]------------
>> > workqueue: WQ_MEM_RECLAIM nvme-wq:nvme_scan_work is flushing !WQ_MEM_RECLAIM events_unbound:kfree_rcu_work
>>
>> Maybe instead kfree_rcu_work should be using a WQ_MEM_RECLAIM workqueue? It
>> is after all freeing memory. Ulad, what do you think?
>>
> We reclaim memory, therefore WQ_MEM_RECLAIM seems to be what we need.
> AFAIR, there is an extra rescuer worker, which can really help us
> make progress under low-memory conditions.
>
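For reference, the warning itself comes from check_flush_dependency() in
kernel/workqueue.c: a WQ_MEM_RECLAIM worker may only flush work that sits on
another WQ_MEM_RECLAIM workqueue, because only those have a rescuer thread
guaranteed to make progress under memory pressure. A rough sketch of the
direction above; kvfree_rcu_wq and the init hook are illustrative, not the
actual kvfree_rcu code:

#include <linux/workqueue.h>

/*
 * Sketch only: put the kvfree_rcu() batching work on a dedicated
 * reclaim-capable workqueue instead of system_unbound_wq, so that
 * kvfree_rcu_barrier() can flush it from WQ_MEM_RECLAIM context
 * without tripping check_flush_dependency().
 */
static struct workqueue_struct *kvfree_rcu_wq;

static int __init kvfree_rcu_wq_init(void)
{
	kvfree_rcu_wq = alloc_workqueue("kvfree_rcu",
					WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	return kvfree_rcu_wq ? 0 : -ENOMEM;
}

The batching code would then queue_work() onto kvfree_rcu_wq rather than
system_unbound_wq.
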
> Do we have a reproducer of the mentioned splat?
I tried to create a kunit test for it (below), but it doesn't trigger the
warning. Maybe it's too simple, or the timing is racy, so kvfree_rcu_barrier()
ends up with no queues to flush?
----8<----
From 1e19ea78e7fe254034970f75e3b7cb705be50163 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@...e.cz>
Date: Tue, 25 Feb 2025 10:51:28 +0100
Subject: [PATCH] add test for kmem_cache_destroy in a workqueue

---
 lib/slub_kunit.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
index f11691315c2f..5fe9775fba05 100644
--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -6,6 +6,7 @@
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/rcupdate.h>
+#include <linux/delay.h>
 #include "../mm/slab.h"
 
 static struct kunit_resource resource;
@@ -181,6 +182,52 @@ static void test_kfree_rcu(struct kunit *test)
 	KUNIT_EXPECT_EQ(test, 0, slab_errors);
 }
 
+struct cache_destroy_work {
+	struct work_struct work;
+	struct kmem_cache *s;
+};
+
+static void cache_destroy_workfn(struct work_struct *w)
+{
+	struct cache_destroy_work *cdw;
+
+	cdw = container_of(w, struct cache_destroy_work, work);
+
+	kmem_cache_destroy(cdw->s);
+}
+
+static void test_kfree_rcu_wq_destroy(struct kunit *test)
+{
+	struct test_kfree_rcu_struct *p;
+	struct cache_destroy_work cdw;
+	struct workqueue_struct *wq;
+	struct kmem_cache *s;
+
+	if (IS_BUILTIN(CONFIG_SLUB_KUNIT_TEST))
+		kunit_skip(test, "can't do kfree_rcu() when test is built-in");
+
+	INIT_WORK_ONSTACK(&cdw.work, cache_destroy_workfn);
+	wq = alloc_workqueue("test_kfree_rcu_destroy_wq", WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
+	if (!wq)
+		kunit_skip(test, "failed to alloc wq");
+
+	s = test_kmem_cache_create("TestSlub_kfree_rcu_wq_destroy",
+				   sizeof(struct test_kfree_rcu_struct),
+				   SLAB_NO_MERGE);
+	p = kmem_cache_alloc(s, GFP_KERNEL);
+
+	kfree_rcu(p, rcu);
+
+	cdw.s = s;
+	queue_work(wq, &cdw.work);
+	msleep(1000);
+	flush_work(&cdw.work);
+
+	destroy_workqueue(wq);
+
+	KUNIT_EXPECT_EQ(test, 0, slab_errors);
+}
+
 static void test_leak_destroy(struct kunit *test)
 {
 	struct kmem_cache *s = test_kmem_cache_create("TestSlub_leak_destroy",
@@ -254,6 +301,7 @@ static struct kunit_case test_cases[] = {
 	KUNIT_CASE(test_clobber_redzone_free),
 	KUNIT_CASE(test_kmalloc_redzone_access),
 	KUNIT_CASE(test_kfree_rcu),
+	KUNIT_CASE(test_kfree_rcu_wq_destroy),
 	KUNIT_CASE(test_leak_destroy),
 	KUNIT_CASE(test_krealloc_redzone_zeroing),
 	{}
--
2.48.1
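
(Note for reproducing: the test skips itself when built in, so it needs
CONFIG_SLUB_KUNIT_TEST=m; loading the module, e.g. with "modprobe slub_kunit",
runs the suite, and the splat, if it triggers, should appear in dmesg.)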