Message-ID: <lr2nridih62djx5ccdijiyacdz2hrubsh52tj6bivr6yfgajsj@mgziscqwlmtp>
Date: Thu, 24 Apr 2025 12:28:37 +0100
From: Pedro Falcato <pfalcato@...e.de>
To: Harry Yoo <harry.yoo@...cle.com>
Cc: Vlastimil Babka <vbabka@...e.cz>, Christoph Lameter <cl@...two.org>,
David Rientjes <rientjes@...gle.com>, Andrew Morton <akpm@...ux-foundation.org>,
Dennis Zhou <dennis@...nel.org>, Tejun Heo <tj@...nel.org>, Mateusz Guzik <mjguzik@...il.com>,
Jamal Hadi Salim <jhs@...atatu.com>, Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>, Vlad Buslov <vladbu@...dia.com>,
Yevgeny Kliteynik <kliteyn@...dia.com>, Jan Kara <jack@...e.cz>, Byungchul Park <byungchul@...com>,
linux-mm@...ck.org, netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/7] Reviving the slab destructor to tackle the
percpu allocator scalability problem
On Thu, Apr 24, 2025 at 05:07:48PM +0900, Harry Yoo wrote:
> Overview
> ========
>
> The slab destructor feature existed in the early days of the slab allocator(s).
> It was removed by commit c59def9f222d ("Slab allocators: Drop support
> for destructors") in 2007 due to a lack of serious use cases at the time.
>
> Eighteen years later, Mateusz Guzik proposed [1] re-introducing a slab
> constructor/destructor pair to mitigate the global serialization point
> (pcpu_alloc_mutex) that occurs when each slab object allocates and frees
> percpu memory during its lifetime.
>
> Consider mm_struct: it allocates two percpu regions (mm_cid and rss_stat),
> so each allocate/free cycle requires two expensive acquire/release
> operations on that mutex.
>
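(For readers following along: the status quo being described is roughly the
pattern below. struct foo is made up for illustration, but alloc_percpu()
and free_percpu() are the real percpu API, and alloc_percpu() serializes on
pcpu_alloc_mutex, so an object with two percpu regions pays that cost twice
per allocate/free cycle.)

    #include <linux/percpu.h>
    #include <linux/slab.h>

    /* Illustrative only -- not code from the series or from mm_struct. */
    struct foo {
            int __percpu *stats;    /* per-CPU state owned by each object */
    };

    static struct foo *foo_alloc(void)
    {
            struct foo *f = kmalloc(sizeof(*f), GFP_KERNEL);

            if (!f)
                    return NULL;

            f->stats = alloc_percpu(int);   /* takes pcpu_alloc_mutex */
            if (!f->stats) {
                    kfree(f);
                    return NULL;
            }
            return f;
    }

    static void foo_free(struct foo *f)
    {
            free_percpu(f->stats);  /* percpu region dies with the object */
            kfree(f);
    }
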
> We can mitigate this contention by retaining the percpu regions after
> the object is freed and releasing them only when the backing slab pages
> are freed.
>
> This can be done with slab constructors and destructors: the constructor
> allocates the percpu memory, and the destructor frees it when the slab
> pages are reclaimed. This slightly alters the constructor's semantics,
> as it can now fail.
>
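(A minimal sketch of what that lifecycle would look like, reusing the
illustrative struct foo from above and assuming a hypothetical failing-ctor
signature -- the series' actual hooks may differ:)

    /*
     * Hypothetical: the ctor runs when a new slab page is populated and
     * may now fail; the dtor runs only when the backing slab page is
     * freed, so the percpu region survives kmem_cache_free() /
     * kmem_cache_alloc() cycles of the object.
     */
    static int foo_ctor(void *obj)
    {
            struct foo *f = obj;

            f->stats = alloc_percpu(int);
            return f->stats ? 0 : -ENOMEM;
    }

    static void foo_dtor(void *obj)
    {
            struct foo *f = obj;

            free_percpu(f->stats);
    }
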
I really really really really don't like this. We're opening a Pandora's box
of locking problems, slab deadlocks, and other subtle issues. IMO the best
solution there would be, what, failing dtors? Which says a lot about the whole
situation...
Case in point:
What happens if you allocate a slab and start ->ctor()-ing objects, and then
one of the ctors fails? We need to free the slab, but not without ->dtor()-ing
everything constructed so far (AIUI this is not handled in this series yet).
Besides this
complication, if failing dtors were added into the mix, we'd be left with a
half-initialized slab(!!) in the middle of the cache waiting to get freed,
without being able to.
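
To make the unwind problem concrete, the slab-population path would need
something along these lines (purely hypothetical code, not from the series;
struct my_cache and objs[] are stand-ins):

    struct my_cache {                       /* stand-in, not struct kmem_cache */
            int  (*ctor)(void *obj);        /* fallible ctor per the series */
            void (*dtor)(void *obj);
    };

    /*
     * Hypothetical unwind when populating a new slab: once ctors can
     * fail, every object constructed so far must be dtor()-ed before
     * the half-built slab page can be thrown away.
     */
    static int construct_new_slab(struct my_cache *c, void **objs,
                                  unsigned int nr_objs)
    {
            unsigned int i;

            for (i = 0; i < nr_objs; i++) {
                    if (c->ctor(objs[i]))
                            goto unwind;
            }
            return 0;

    unwind:
            while (i--)
                    c->dtor(objs[i]);
            return -ENOMEM;         /* caller frees the untouched slab page */
    }
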
Then there are obviously other problems like: whatever the ctor/dtor calls must
never require the slab allocator (directly or indirectly) and must never
do direct reclaim (ever!), at the risk of a deadlock. The pcpu allocator
is (AIUI!) already a no-go because of such issues.
Then there's the separate (but adjacent, particularly as we're considering
this series for its performance improvements) issue that the ctor() and
dtor() interfaces are terrible, in the sense that they do not let you batch
in any way, shape, or form (requiring us to lock/unlock many times, allocate
many times, etc). If this is being done for performance, I would prefer
a superior ctor/dtor interface that takes something like a slab iterator and
lets you batch those operations, along the lines of the sketch below.
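
Purely hypothetical sketch -- nothing like this exists today and the names
are invented -- but it shows the shape: one callback per slab, with an
iterator over that slab's objects, so the callee can take its locks (or
preallocate) once per slab instead of once per object:

    struct slab_obj_iter;   /* opaque; walks the objects of one slab */

    /* hypothetical: returns the next object in the slab, or NULL when done */
    void *slab_obj_iter_next(struct slab_obj_iter *it);

    struct batched_slab_ops {
            /* construct all objects of a freshly allocated slab, or fail */
            int  (*ctor_batch)(struct slab_obj_iter *it, unsigned int nr_objs);
            /* tear down all objects of a slab about to be freed */
            void (*dtor_batch)(struct slab_obj_iter *it, unsigned int nr_objs);
    };

    static int foo_ctor_batch(struct slab_obj_iter *it, unsigned int nr_objs)
    {
            struct foo *f;

            /*
             * A real user would pair this with some bulk percpu allocation
             * so pcpu_alloc_mutex is taken once for the whole slab; shown
             * per-object here only to keep the sketch short.
             */
            while ((f = slab_obj_iter_next(it))) {
                    f->stats = alloc_percpu(int);
                    if (!f->stats)
                            return -ENOMEM; /* cache would dtor_batch() the rest */
            }
            return 0;
    }
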
The ghost of 1992 Solaris still haunts us...
> This series is functional (although not compatible with MM debug
> features yet), but still far from perfect. I’m actively refining it and
> would appreciate early feedback before I improve it further. :)
>
> This series is based on slab/for-next [2].
>
> Performance Improvement
> =======================
>
> I measured the benefit of this series for two different users:
> exec() and tc filter insertion/removal.
>
> exec() throughput
> -----------------
>
> The performance of exec() is important when short-lived processes are
> frequently created. For example: shell-heavy workloads and running many
> test cases [3].
>
> I measured exec() throughput with a microbenchmark:
> - a 33% exec() throughput gain on a 2-socket machine with 192 CPUs,
> - a 4.56% gain on a desktop with 24 hardware threads, and
> - even a 4% gain in single-threaded exec() throughput.
>
> Further investigation showed that this was due to the overhead of
> acquiring/releasing pcpu_alloc_mutex and contention on it.
>
> See patch 7 for more detail on the experiment.
>
> Traffic Filter Insertion and Removal
> ------------------------------------
>
> Each tc filter allocates three percpu memory regions per tc_action object,
> so frequent filter insertion and removal contends heavily on the same
> mutex.
>
> In the Linux-kernel tools/testing tc-filter benchmark (see patch 4 for
> more detail), I observed a 26% reduction in system time and much less
> contention on pcpu_alloc_mutex with this series.
>
> I saw in old mailing list threads that Mellanox (now NVIDIA) engineers
> cared about tc filter insertion rates; these changes may still benefit
> workloads they run today.
>
The performance improvements are obviously fantastic, but I do wonder
if things could be fixed by just fixing the underlying problems, instead
of papering over them with slab allocator magic and dubious object lifecycles.
In this case, the big issue is that the pcpu allocator does not scale well.
--
Pedro