Message-ID: <YibSe/txAsubzqUw@ip-172-31-19-208.ap-northeast-1.compute.internal>
Date:   Tue, 8 Mar 2022 03:50:19 +0000
From:   Hyeonggon Yoo <42.hyeyoo@...il.com>
To:     Xiongwei Song <sxwjean@...il.com>
Cc:     linux-mm@...ck.org, Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Marco Elver <elver@...gle.com>,
        Matthew Wilcox <willy@...radead.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 2/2] mm/slub: refactor deactivate_slab()

On Tue, Mar 08, 2022 at 09:40:07AM +0800, Xiongwei Song wrote:
> Hello,
> 
> On Mon, Mar 7, 2022 at 3:41 PM Hyeonggon Yoo <42.hyeyoo@...il.com> wrote:
> >
> > Simplify deactivate_slab() by unlocking n->list_lock and retrying
> > cmpxchg_double() when cmpxchg_double() fails, and performing
> > add_{partial,full} only when it succeeds.
> >
> > Releasing and taking n->list_lock again here is not harmful as SLUB
> > avoids deactivating slabs as much as possible.
> >
> > [ vbabka@...e.cz: perform add_{partial,full} when cmpxchg_double()
> >   succeeds.
> >
> >   Count deactivating full slabs even if the debugging flag is not set. ]
> >
> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
> > ---
> >  mm/slub.c | 91 +++++++++++++++++++++++--------------------------------
> >  1 file changed, 38 insertions(+), 53 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 1ce09b0347ad..f0cb9d0443ac 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2348,10 +2348,10 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
> >  static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> >                             void *freelist)
> >  {
> > -       enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE };
> > +       enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE, M_FULL_NOLIST };
> >         struct kmem_cache_node *n = get_node(s, slab_nid(slab));
> > -       int lock = 0, free_delta = 0;
> > -       enum slab_modes l = M_NONE, m = M_NONE;
> > +       int free_delta = 0;
> > +       enum slab_modes mode = M_NONE;
> >         void *nextfree, *freelist_iter, *freelist_tail;
> >         int tail = DEACTIVATE_TO_HEAD;
> >         unsigned long flags = 0;
> > @@ -2393,14 +2393,10 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> >          * Ensure that the slab is unfrozen while the list presence
> >          * reflects the actual number of objects during unfreeze.
> >          *
> > -        * We setup the list membership and then perform a cmpxchg
> > -        * with the count. If there is a mismatch then the slab
> > -        * is not unfrozen but the slab is on the wrong list.
> > -        *
> > -        * Then we restart the process which may have to remove
> > -        * the slab from the list that we just put it on again
> > -        * because the number of objects in the slab may have
> > -        * changed.
> > +        * We first perform cmpxchg while holding the lock, and insert
> > +        * to the list only when it succeeds. If there is a mismatch,
> > +        * the slab is not unfrozen and the number of objects in the
> > +        * slab may have changed. Then release the lock and retry cmpxchg.
> >          */
> >  redo:
> >
> > @@ -2420,61 +2416,50 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> >         new.frozen = 0;
> >
> >         if (!new.inuse && n->nr_partial >= s->min_partial)
> > -               m = M_FREE;
> > +               mode = M_FREE;
> >         else if (new.freelist) {
> > -               m = M_PARTIAL;
> > -               if (!lock) {
> > -                       lock = 1;
> > -                       /*
> > -                        * Taking the spinlock removes the possibility that
> > -                        * acquire_slab() will see a slab that is frozen
> > -                        */
> > -                       spin_lock_irqsave(&n->list_lock, flags);
> > -               }
> > -       } else {
> > -               m = M_FULL;
> > -               if (kmem_cache_debug_flags(s, SLAB_STORE_USER) && !lock) {
> > -                       lock = 1;
> > -                       /*
> > -                        * This also ensures that the scanning of full
> > -                        * slabs from diagnostic functions will not see
> > -                        * any frozen slabs.
> > -                        */
> > -                       spin_lock_irqsave(&n->list_lock, flags);
> > -               }
> > -       }
> > -
> > -       if (l != m) {
> > -               if (l == M_PARTIAL)
> > -                       remove_partial(n, slab);
> > -               else if (l == M_FULL)
> > -                       remove_full(s, n, slab);
> > +               mode = M_PARTIAL;
> > +               /*
> > +                * Taking the spinlock removes the possibility that
> > +                * acquire_slab() will see a slab that is frozen
> > +                */
> > +               spin_lock_irqsave(&n->list_lock, flags);
> > +       } else if (kmem_cache_debug_flags(s, SLAB_STORE_USER)) {
> > +               mode = M_FULL;
> > +               /*
> > +                * This also ensures that the scanning of full
> > +                * slabs from diagnostic functions will not see
> > +                * any frozen slabs.
> > +                */
> > +               spin_lock_irqsave(&n->list_lock, flags);
> > +       } else
> > +               mode = M_FULL_NOLIST;
> >
> > -               if (m == M_PARTIAL)
> > -                       add_partial(n, slab, tail);
> > -               else if (m == M_FULL)
> > -                       add_full(s, n, slab);
> > -       }
> >
> > -       l = m;
> >         if (!cmpxchg_double_slab(s, slab,
> >                                 old.freelist, old.counters,
> >                                 new.freelist, new.counters,
> > -                               "unfreezing slab"))
> > +                               "unfreezing slab")) {
> > +               if (mode == M_PARTIAL || mode == M_FULL)
> > +                       spin_unlock_irqrestore(&n->list_lock, flags);
> 
> The slab doesn't belong to any node here, so should we remove the
> locking/unlocking of the spinlock around the cmpxchg_double_slab() call?
> Would just calling spin_lock_irqsave() before the add_partial()/add_full()
> call be fine?
>

I thought about that and tested it, but it is not okay.

Taking the spinlock around the cmpxchg prevents a race between
__slab_free() and deactivate_slab(). The list can be corrupted without
the spinlock.

Think about the case below (when SLAB_STORE_USER is set):

__slab_free()			deactivate_slab()
=================		=================
				(deactivating full slab)
				cmpxchg_double()


spin_lock_irqsave()
cmpxchg_double()		

/* not in full list yet */
remove_full()
add_partial()
spin_unlock_irqrestore()
				spin_lock_irqsave()
				add_full()			
				spin_unlock_irqrestore()
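
In other words, remove_full() in __slab_free() runs before
deactivate_slab() has put the slab on the full list, so the later
add_full() corrupts the lists. A rough sketch of the ordering the patch
relies on, condensed from the M_FULL path above (not the complete
function):

spin_lock_irqsave(&n->list_lock, flags);
if (!cmpxchg_double_slab(s, slab,
			old.freelist, old.counters,
			new.freelist, new.counters,
			"unfreezing slab")) {
	/* lost the race: drop the lock and retry with fresh counters */
	spin_unlock_irqrestore(&n->list_lock, flags);
	goto redo;
}
/*
 * The slab is unfrozen and on the full list before anyone else can
 * take n->list_lock, so __slab_free() always sees a consistent state.
 */
add_full(s, n, slab);
spin_unlock_irqrestore(&n->list_lock, flags);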



> >                 goto redo;
> 
> How about a do {...} while (!cmpxchg_double_slab()) loop? Would the
> readability be better?
>

This will be:

do {
	if (mode == M_PARTIAL || mode == M_FULL)
		spin_unlock_irqrestore();

	[...]

} while (!cmpxchg_double_slab());
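
If I'm reading it right, the unlock at the top of the loop body only
works because mode is still M_NONE on the first iteration; the
unlock-then-retry dependency becomes implicit rather than spelled out.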

So I think the goto version reads better.

Thanks!

> Regards,
> Xiongwei
> 
> > +       }
> >
> > -       if (lock)
> > -               spin_unlock_irqrestore(&n->list_lock, flags);
> >
> > -       if (m == M_PARTIAL)
> > +       if (mode == M_PARTIAL) {
> > +               add_partial(n, slab, tail);
> > +               spin_unlock_irqrestore(&n->list_lock, flags);
> >                 stat(s, tail);
> > -       else if (m == M_FULL)
> > -               stat(s, DEACTIVATE_FULL);
> > -       else if (m == M_FREE) {
> > +       } else if (mode == M_FREE) {
> >                 stat(s, DEACTIVATE_EMPTY);
> >                 discard_slab(s, slab);
> >                 stat(s, FREE_SLAB);
> > -       }
> > +       } else if (mode == M_FULL) {
> > +               add_full(s, n, slab);
> > +               spin_unlock_irqrestore(&n->list_lock, flags);
> > +               stat(s, DEACTIVATE_FULL);
> > +       } else if (mode == M_FULL_NOLIST)
> > +               stat(s, DEACTIVATE_FULL);
> >  }
> >
> >  #ifdef CONFIG_SLUB_CPU_PARTIAL
> > --
> > 2.33.1
> >
> >

-- 
Thank you, You are awesome!
Hyeonggon :-)
