Date:   Sat, 22 Oct 2022 13:14:10 +0900
From:   Hyeonggon Yoo <42.hyeyoo@...il.com>
To:     Vlastimil Babka <vbabka@...e.cz>
Cc:     Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/slub: remove dead code for debug caches on
 deactivate_slab()

On Fri, Oct 21, 2022 at 12:43:42PM +0200, Vlastimil Babka wrote:
> On 10/14/22 13:43, Hyeonggon Yoo wrote:
> > After commit c7323a5ad0786 ("mm/slub: restrict sysfs validation to debug
> > caches and make it safe"), SLUB does not take a slab from partial list for
> 
> I'm confused by "SLUB does not take a slab from partial list" here. Did you
> mean something like "SLUB never installs (even temporarily) a percpu slab
> for debug caches"?

Yes.

> So that means we never deactivate percpu slabs for debug
> caches.

Yes.

> And since debug caches are also the only ones that use the full
> list, we no longer need to care about the full list in deactivate_slab(), right?

Yes, exactly, you got it right!

Let me rephrase:

"After commit c7323a5ad0786 ("mm/slub: restrict sysfs validation to debug
caches and make it safe"), SLUB never installs a percpu slab for debug caches
and thus never deactivates a percpu slab for them.

Since only debug caches (those with SLAB_STORE_USER) use the full list,
deactivate_slab() no longer needs to move slabs to the full list. Remove the
dead code in deactivate_slab()."


Feel free to change this ;-)

> 
> > debug caches. As deactivation isn't needed anymore, remove dead code.
> > 
> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
> 
> Otherwise it looks correct to me, just wanted to clarify I'm not missing
> something.

You are not missing anything.
Thank you for the clarification.

Hyeonggon

> 
> > ---
> >  mm/slub.c | 16 ++--------------
> >  1 file changed, 2 insertions(+), 14 deletions(-)
> > 
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 96dd392d7f99..e2215240954d 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2411,7 +2411,7 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
> >  static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> >  			    void *freelist)
> >  {
> > -	enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE, M_FULL_NOLIST };
> > +	enum slab_modes { M_NONE, M_PARTIAL, M_FREE, M_FULL_NOLIST };
> >  	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
> >  	int free_delta = 0;
> >  	enum slab_modes mode = M_NONE;
> > @@ -2487,14 +2487,6 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> >  		 * acquire_slab() will see a slab that is frozen
> >  		 */
> >  		spin_lock_irqsave(&n->list_lock, flags);
> > -	} else if (kmem_cache_debug_flags(s, SLAB_STORE_USER)) {
> > -		mode = M_FULL;
> > -		/*
> > -		 * This also ensures that the scanning of full
> > -		 * slabs from diagnostic functions will not see
> > -		 * any frozen slabs.
> > -		 */
> > -		spin_lock_irqsave(&n->list_lock, flags);
> >  	} else {
> >  		mode = M_FULL_NOLIST;
> >  	}
> > @@ -2504,7 +2496,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> >  				old.freelist, old.counters,
> >  				new.freelist, new.counters,
> >  				"unfreezing slab")) {
> > -		if (mode == M_PARTIAL || mode == M_FULL)
> > +		if (mode == M_PARTIAL)
> >  			spin_unlock_irqrestore(&n->list_lock, flags);
> >  		goto redo;
> >  	}
> > @@ -2518,10 +2510,6 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> >  		stat(s, DEACTIVATE_EMPTY);
> >  		discard_slab(s, slab);
> >  		stat(s, FREE_SLAB);
> > -	} else if (mode == M_FULL) {
> > -		add_full(s, n, slab);
> > -		spin_unlock_irqrestore(&n->list_lock, flags);
> > -		stat(s, DEACTIVATE_FULL);
> >  	} else if (mode == M_FULL_NOLIST) {
> >  		stat(s, DEACTIVATE_FULL);
> >  	}
> 
