Message-ID: <CAG48ez2jKFXxkMhq-Q7-WNHp_FTYL7yOpCQa8e_yFDm05e3Few@mail.gmail.com>
Date: Wed, 7 Aug 2024 21:11:55 +0200
From: Jann Horn <jannh@...gle.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: "Paul E. McKenney" <paulmck@...nel.org>, Joel Fernandes <joel@...lfernandes.org>,
Josh Triplett <josh@...htriplett.org>, Boqun Feng <boqun.feng@...il.com>,
Christoph Lameter <cl@...ux.com>, David Rientjes <rientjes@...gle.com>, Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Lai Jiangshan <jiangshanlai@...il.com>,
Zqiang <qiang.zhang1211@...il.com>, Julia Lawall <Julia.Lawall@...ia.fr>,
Jakub Kicinski <kuba@...nel.org>, "Jason A. Donenfeld" <Jason@...c4.com>,
"Uladzislau Rezki (Sony)" <urezki@...il.com>, Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, rcu@...r.kernel.org,
Alexander Potapenko <glider@...gle.com>, Marco Elver <elver@...gle.com>, Dmitry Vyukov <dvyukov@...gle.com>,
kasan-dev@...glegroups.com, Mateusz Guzik <mjguzik@...il.com>
Subject: Re: [PATCH v2 4/7] mm, slab: reintroduce rcu_barrier() into kmem_cache_destroy()
On Wed, Aug 7, 2024 at 12:31 PM Vlastimil Babka <vbabka@...e.cz> wrote:
> There used to be an rcu_barrier() for SLAB_TYPESAFE_BY_RCU caches in
> kmem_cache_destroy() until commit 657dc2f97220 ("slab: remove
> synchronous rcu_barrier() call in memcg cache release path") moved it to
> an asynchronous work that finishes the destroying of such caches.
>
> The motivation for that commit was the MEMCG_KMEM integration that at
> the time created and removed clones of the global slab caches together
> with their cgroups, and blocking cgroup removal was unwelcome. The
> implementation later changed to per-object memcg tracking using a single
> cache, so there should be no more need for a fast non-blocking
> kmem_cache_destroy(), which is typically only done when a module is
> unloaded etc.
>
> Going back to a synchronous barrier has the following advantages:
>
> - simpler implementation
> - it's easier to test the result of kmem_cache_destroy() in a kunit test
>
> Thus effectively revert commit 657dc2f97220. It is not a 1:1 revert as
> the code has changed since. The main part is that kmem_cache_release(s)
> is always called from kmem_cache_destroy(), but for SLAB_TYPESAFE_BY_RCU
> caches there's an rcu_barrier() first.
>
> Suggested-by: Mateusz Guzik <mjguzik@...il.com>
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
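
For anyone skimming the thread, my mental model of the resulting destroy
path is roughly the sketch below. This is only my paraphrase of the commit
message, not the actual patch hunk; the real code has more steps (alias
handling, refcounting, unlinking from slab_caches, debug checks) that are
omitted here.

/*
 * Hedged sketch of kmem_cache_destroy() after this change; details of
 * shutdown, refcounting and error handling are intentionally left out.
 */
void kmem_cache_destroy(struct kmem_cache *s)
{
	/* ... sanity checks, shutdown of per-cpu/per-node structures ... */

	if (s->flags & SLAB_TYPESAFE_BY_RCU)
		rcu_barrier();	/* wait for pending RCU-delayed slab frees */

	kmem_cache_release(s);	/* now always called synchronously from here */
}

i.e. the asynchronous work item from 657dc2f97220 is gone, and a caller
returning from kmem_cache_destroy() knows the cache is fully released.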
Reviewed-by: Jann Horn <jannh@...gle.com>