Message-ID: <CA+fCnZfc5yhxkE+DQeOWcstH9P6g7T96eyCF4wzYXWNVfFrQ1A@mail.gmail.com>
Date: Fri, 13 Sep 2024 15:27:01 +0200
From: Andrey Konovalov <andreyknvl@...il.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Linux Memory Management List <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>, 
	David Rientjes <rientjes@...gle.com>, Christoph Lameter <cl@...ux.com>, Hyeonggon Yoo <42.hyeyoo@...il.com>, 
	Imran Khan <imran.f.khan@...cle.com>
Subject: Re: Question about freeing of empty per-CPU partial slabs in SLUB

On Thu, Sep 12, 2024 at 10:34 AM Vlastimil Babka <vbabka@...e.cz> wrote:
>
> > "If the partial slab becomes an empty slab after freeing up the
> > object, it will be left in its current list if the number of partial
> > slabs for the concerned node is within the limits (i.e < slab cache’s
> > min_partial). This applies to both slabs belonging to a per-cpu
> > partial slab list and slabs belonging to a per-node partial slab list.
> > If the number of partial slabs are outside the limit (i.e >= slab
> > cache’s min partial) then the newly available empty slab is freed and
> > is removed from the corresponding partial slab list."
> >
> > The part that seems wrong to me here is the statement that this
> > applies to the per-CPU partial list. Based on the code in __slab_free,
> > it looks like it cannot reach the slab_empty label for a slab that is
> > on the per-CPU partial list.
> >
> > (I know that an empty per-CPU partial slab can be freed when the list
> > overflows or via shrinking, the question is about the slab being freed
> > directly by __slab_free.)
> >
> > Is the article wrong with regard to this case? Or did this behavior
> > change recently (I failed to find any traces of that)?
>
> I don't think the behavior changed recently in this aspect, only in some
> other details: on_node_partial is now tracked with a page flag for better
> performance, and slabs on the per-cpu partial list are no longer frozen.
>
> I think the paragraph you quoted can be interpreted together with this part
> of the preceding one: "However, while putting this partial slab on a per-cpu
> partial slab list, if it is found that the per-cpu partial slab list is
> already full, then all slabs from the per-cpu partial slab list are
> unfrozen, i.e., they are moved to a per-node partial slab list, and this
> new partial slab is put on the per-cpu partial slab list."
>
> So while flushing the per-cpu partial list, the per-cpu partial slabs are
> moved to the per-node partial list, and when __put_partials() finds that
> some of them are empty, it applies the same s->min_partial threshold to
> decide whether to keep them on the node partial list or free them. So in
> that sense the "This applies to both..." part is correct, although, as you
> say, it cannot immediately affect a slab on the per-cpu partial list that
> we are freeing to.
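
Right, that resolves it. Restating the flushing path in simplified form
(a sketch of the logic only, not the actual mm/slub.c source; locking,
statistics and the batched discard list are omitted, and the helper name
is made up):

	static void put_partials_sketch(struct kmem_cache *s,
					struct slab *partial_list)
	{
		struct slab *slab, *next;

		for (slab = partial_list; slab; slab = next) {
			struct kmem_cache_node *n = get_node(s, slab_nid(slab));

			next = slab->next;

			if (!slab->inuse && n->nr_partial >= s->min_partial)
				/* Empty and over the node's limit: free it. */
				discard_slab(s, slab);
			else
				/* Keep it, now on the node partial list. */
				add_partial(n, slab, DEACTIVATE_TO_TAIL);
		}
	}

So an empty slab sitting on a per-cpu partial list is only freed once the
list is flushed to the node, and the same s->min_partial threshold is
applied at that point.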

Ack, thank you for the response!
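
For the archives, the tail of __slab_free() that prompted the question
looks roughly like the sketch below (heavily simplified, not the actual
mm/slub.c source; n is the kmem_cache_node whose list_lock was taken,
and it stays NULL when no node list manipulation is needed):

	if (!n) {
		/*
		 * Frozen slab, or the free just put it on a per-cpu
		 * partial list: no node list to touch, so the slab_empty
		 * path below cannot be reached.
		 */
		return;
	}

	if (prior && !on_node_partial) {
		/*
		 * Partially empty but not on the node partial list (e.g.
		 * it sits on a per-cpu partial list): leave it alone.
		 */
		return;
	}

	if (!new.inuse && n->nr_partial >= s->min_partial)
		goto slab_empty;
	...
	return;

slab_empty:
	/*
	 * The slab is empty and the node already has enough partial
	 * slabs: take it off the node partial list and free it.
	 */
	remove_partial(n, slab);
	discard_slab(s, slab);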
