Message-ID: <c71a884d-714f-4741-906f-4df162bde303@suse.cz>
Date: Thu, 12 Sep 2024 10:34:33 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Andrey Konovalov <andreyknvl@...il.com>
Cc: Linux Memory Management List <linux-mm@...ck.org>,
 LKML <linux-kernel@...r.kernel.org>, David Rientjes <rientjes@...gle.com>,
 Christoph Lameter <cl@...ux.com>, Hyeonggon Yoo <42.hyeyoo@...il.com>,
 Imran Khan <imran.f.khan@...cle.com>
Subject: Re: Question about freeing of empty per-CPU partial slabs in SLUB

On 9/10/24 18:38, Andrey Konovalov wrote:
> Hi Vlastimil

Hi!

> (and other SLUB maintainers),

You didn't CC them, so I'm doing it at least for the active ones...

> I have a question about freeing of empty per-CPU partial slabs in the
> SLUB allocator.
> 
> The "Linux SLUB Allocator Internals and Debugging" article [1] states:

And we can Cc Imran too :)

> "If the partial slab becomes an empty slab after freeing up the
> object, it will be left in its current list if the number of partial
> slabs for the concerned node is within the limits (i.e < slab cache’s
> min_partial). This applies to both slabs belonging to a per-cpu
> partial slab list and slabs belonging to a per-node partial slab list.
> If the number of partial slabs are outside the limit (i.e >= slab
> cache’s min partial) then the newly available empty slab is freed and
> is removed from the corresponding partial slab list."
> 
> The part that seems wrong to me here is the statement that this
> applies to the per-CPU partial list. Based on the code in __slab_free,
> it looks like it cannot reach the slab_empty label for a slab that is
> on the per-CPU partial list.
> 
> (I know that an empty per-CPU partial slab can be freed when the list
> overflows or via shrinking, the question is about the slab being freed
> directly by __slab_free.)
> 
> Is the article wrong with regards to this case? Or did this behavior
> change recently (I failed to find any traces of this)?

I don't think the behavior changed recently in this respect, only in
some other details, such as tracking on_node_partial with a page flag
for better performance, and slabs on the per-cpu partial list no longer
being frozen.
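
For reference, the tail of __slab_free() currently looks roughly like
this (heavily condensed from memory, not verbatim mm/slub.c, so check
your tree; the freelist cmpxchg loop and the stats are elided):

	/*
	 * Condensed sketch of the tail of __slab_free(). After the
	 * cmpxchg loop, n is non-NULL only if list manipulation may be
	 * needed (and then n->list_lock is already taken).
	 */
	if (likely(!n)) {
		if (!was_frozen && !prior && kmem_cache_has_cpu_partial(s))
			/* Was full, now partial: put on the cpu partial list. */
			put_cpu_partial(s, slab, 1);
		return;
	}

	/*
	 * Already partial but not on the node partial list, i.e. it
	 * sits on some cpu partial list. Leave it alone even if it just
	 * became empty, which is why the slab_empty path below is
	 * unreachable for cpu partial slabs, as you observed.
	 */
	if (prior && !on_node_partial) {
		spin_unlock_irqrestore(&n->list_lock, flags);
		return;
	}

	if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
		goto slab_empty;
	...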

I think the paragraph you quoted can be interpreted together with this part
of the preceding one: "However while putting this partial slab on a per-cpu
partial slab list if it is found that the per-cpu partial slab list is
already full, then all slabs from the per-cpu partial slab list are unfrozen
i.e they are moved to a per-node partial slab list and this new partial slab
is put on the per-cpu partial slab list."
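
That overflow handling lives in put_cpu_partial(); condensed from
memory (again, not verbatim), it is roughly:

	/* Condensed sketch of put_cpu_partial(), locking elided. */
	oldslab = this_cpu_read(s->cpu_slab->partial);
	if (oldslab && drain && oldslab->slabs >= s->cpu_partial_slabs) {
		/*
		 * The cpu partial list is full: hand the whole old list
		 * to __put_partials() below and start a new one.
		 */
		slab_to_put = oldslab;
		oldslab = NULL;
	}
	slab->next = oldslab;
	this_cpu_write(s->cpu_slab->partial, slab);
	...
	if (slab_to_put)
		__put_partials(s, slab_to_put);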

So while flushing the per-cpu partial list, the per-cpu partial slabs
are moved to the per-node partial list, and when __put_partials() finds
that some of them are empty, it applies the same s->min_partial
threshold to decide whether to keep them on the node partial list or
free them. So in that sense the "This applies to both..." part is
correct, although, as you say, it cannot immediately affect the per-cpu
partial slab we are freeing to.
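
The per-slab decision in __put_partials() is then roughly this
(condensed sketch once more):

	/* Condensed sketch of the loop body of __put_partials(). */
	if (unlikely(!slab->inuse && n->nr_partial >= s->min_partial)) {
		/* Empty and the node has enough partial slabs: discard. */
		slab->next = slab_to_discard;
		slab_to_discard = slab;
	} else {
		add_partial(n, slab, DEACTIVATE_TO_TAIL);
	}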

> Other than this statement, the article seems to be correct about all
> other small details that I looked into, so I'm not sure whether my
> understanding of the code is wrong or the article is.

Yeah, I like the articles too. Checking the code as well is a good
strategy, as it may have evolved further since the articles were
published :)

> I hope you could clarify this.

Hope that helps!
Vlastimil

> Thank you!
> 
> [1] https://blogs.oracle.com/linux/post/linux-slub-allocator-internals-and-debugging-1

