Message-ID: <82e8147e-f031-6bc2-9395-56d2052e62cb@suse.cz>
Date: Fri, 21 Oct 2022 12:43:42 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Hyeonggon Yoo <42.hyeyoo@...il.com>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/slub: remove dead code for debug caches on
deactivate_slab()
On 10/14/22 13:43, Hyeonggon Yoo wrote:
> After commit c7323a5ad0786 ("mm/slub: restrict sysfs validation to debug
> caches and make it safe"), SLUB does not take a slab from partial list for
I'm confused by "SLUB does not take a slab from partial list" here. Did you
mean something like "SLUB never installs (even temporarily) a percpu slab
for debug caches"? So that means we never deactivate percpu slabs for debug
caches. And since debug caches are also the only ones that use the full
list, we no longer need to care about the full list in deactivate_slab(), right?
> debug caches. As deactivation isn't needed anymore, remove dead code.
>
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
Otherwise it looks correct to me, just wanted to clarify I'm not missing
something.
> ---
> mm/slub.c | 16 ++--------------
> 1 file changed, 2 insertions(+), 14 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 96dd392d7f99..e2215240954d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2411,7 +2411,7 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
> static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> void *freelist)
> {
> - enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE, M_FULL_NOLIST };
> + enum slab_modes { M_NONE, M_PARTIAL, M_FREE, M_FULL_NOLIST };
> struct kmem_cache_node *n = get_node(s, slab_nid(slab));
> int free_delta = 0;
> enum slab_modes mode = M_NONE;
> @@ -2487,14 +2487,6 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> * acquire_slab() will see a slab that is frozen
> */
> spin_lock_irqsave(&n->list_lock, flags);
> - } else if (kmem_cache_debug_flags(s, SLAB_STORE_USER)) {
> - mode = M_FULL;
> - /*
> - * This also ensures that the scanning of full
> - * slabs from diagnostic functions will not see
> - * any frozen slabs.
> - */
> - spin_lock_irqsave(&n->list_lock, flags);
> } else {
> mode = M_FULL_NOLIST;
> }
> @@ -2504,7 +2496,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> old.freelist, old.counters,
> new.freelist, new.counters,
> "unfreezing slab")) {
> - if (mode == M_PARTIAL || mode == M_FULL)
> + if (mode == M_PARTIAL)
> spin_unlock_irqrestore(&n->list_lock, flags);
> goto redo;
> }
> @@ -2518,10 +2510,6 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> stat(s, DEACTIVATE_EMPTY);
> discard_slab(s, slab);
> stat(s, FREE_SLAB);
> - } else if (mode == M_FULL) {
> - add_full(s, n, slab);
> - spin_unlock_irqrestore(&n->list_lock, flags);
> - stat(s, DEACTIVATE_FULL);
> } else if (mode == M_FULL_NOLIST) {
> stat(s, DEACTIVATE_FULL);
> }