Message-Id: <20260116-sheaves-for-all-v3-0-5595cb000772@suse.cz>
Date: Fri, 16 Jan 2026 15:40:20 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Harry Yoo <harry.yoo@...cle.com>, Petr Tesarik <ptesarik@...e.com>,
Christoph Lameter <cl@...two.org>, David Rientjes <rientjes@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Hao Li <hao.li@...ux.dev>, Andrew Morton <akpm@...ux-foundation.org>,
Uladzislau Rezki <urezki@...il.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Alexei Starovoitov <ast@...nel.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-rt-devel@...ts.linux.dev,
bpf@...r.kernel.org, kasan-dev@...glegroups.com,
Vlastimil Babka <vbabka@...e.cz>, kernel test robot <oliver.sang@...el.com>,
stable@...r.kernel.org
Subject: [PATCH v3 00/21] slab: replace cpu (partial) slabs with sheaves
Percpu sheaves caching was introduced as opt-in, but the goal was to
eventually move all caches to it. This is the next step: enabling
sheaves for all caches (except the two bootstrap caches) and then
removing the per-cpu (partial) slabs and a lot of the associated code.
Besides (hopefully) improved performance, this removes the rather
complicated code related to the lockless fastpaths (using
this_cpu_try_cmpxchg128/64) and the complications it causes with
PREEMPT_RT and kmalloc_nolock().
The lockless slab freelist+counters update operation using
try_cmpxchg128/64 remains and is crucial for freeing remote NUMA objects
without repeating the "alien" array flushing of SLAB, and for flushing
objects from sheaves back to slabs mostly without taking the node
list_lock.
This v3 is the first non-RFC (for real). I plan to expose the series to
linux-next at this point. Because of the ongoing troubles with
kmalloc_nolock() that are solved with sheaves, I think it's worth aiming
for 7.0 if it passes linux-next testing.
Git branch for the v3:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=sheaves-for-all-v3
Which is a snapshot of:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=b4/sheaves-for-all
Based on:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git/log/?h=slab/for-7.0/sheaves
- includes a sheaves optimization that seemed minor, but an lkp test
robot result showed significant improvements:
https://lore.kernel.org/all/202512291555.56ce2e53-lkp@intel.com/
(though it could be an uncommon corner-case workload)
- includes the kmalloc_nolock() fix commit a4ae75d1b6a2 that is undone
as part of this series
Significant (but not critical) remaining TODOs:
- Integration of rcu sheaves handling with kfree_rcu batching.
- Currently the kfree_rcu batching is almost completely bypassed. I'm
thinking it could be adjusted to handle rcu sheaves in addition to
individual objects, to get the best of both.
- Performance evaluation. Petr Tesarik has been doing that on the RFC
with some promising results (thanks!) and also found a memory leak.
Note that, as with many things, this caching scheme change is a tradeoff, as
summarized by Christoph:
https://lore.kernel.org/all/f7c33974-e520-387e-9e2f-1e523bfe1545@gentwo.org/
- Objects allocated from sheaves should have better temporal locality
(likely recently freed, thus cache hot) but worse spatial locality
(likely from many different slabs, increasing memory usage and
possibly TLB pressure on kernel's direct map).
Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
---
Changes in v3:
- Rebase to current slab/for-7.0/sheaves which itself is rebased to
slab/for-next-fixes to include commit a4ae75d1b6a2 ("slab: fix
kmalloc_nolock() context check for PREEMPT_RT")
- Revert a4ae75d1b6a2 as part of "slab: simplify kmalloc_nolock()" as
it's no longer necessary.
- Add cache_has_sheaves() helper to test for s->sheaf_capacity, and use
it in more places, replacing s->cpu_sheaves tests that were missed
(Hao Li)
- Fix a bug where kmalloc_nolock() could end up trying to allocate empty
sheaf (not compatible with !allow_spin) in __pcs_replace_full_main()
(Hao Li)
- Fix missing inc_slabs_node() in ___slab_alloc() ->
alloc_from_new_slab() path. (Hao Li)
- Also fix a bug where refill_objects() -> alloc_from_new_slab ->
free_new_slab_nolock() (previously defer_deactivate_slab()) would
do inc_slabs_node() without a matching dec_slabs_node()
- Make __free_slab call free_frozen_pages_nolock() when !allow_spin.
This was correct in the first RFC. (Hao Li)
- Add patch to make SLAB_CONSISTENCY_CHECKS prevent merging.
- Add tags from several people (thanks!)
- Fix checkpatch warnings.
- Link to v2: https://patch.msgid.link/20260112-sheaves-for-all-v2-0-98225cfb50cf@suse.cz
Changes in v2:
- Rebased to v6.19-rc1+slab.git slab/for-7.0/sheaves
- Some of the preliminary patches from the RFC went in there.
- Incorporate feedback/reports from many people (thanks!), including:
- Make caches with sheaves mergeable.
- Fix a major memory leak.
- Cleanup of stat items.
- Link to v1: https://patch.msgid.link/20251023-sheaves-for-all-v1-0-6ffa2c9941c0@suse.cz
---
Vlastimil Babka (21):
mm/slab: add rcu_barrier() to kvfree_rcu_barrier_on_cache()
slab: add SLAB_CONSISTENCY_CHECKS to SLAB_NEVER_MERGE
mm/slab: move and refactor __kmem_cache_alias()
mm/slab: make caches with sheaves mergeable
slab: add sheaves to most caches
slab: introduce percpu sheaves bootstrap
slab: make percpu sheaves compatible with kmalloc_nolock()/kfree_nolock()
slab: handle kmalloc sheaves bootstrap
slab: add optimized sheaf refill from partial list
slab: remove cpu (partial) slabs usage from allocation paths
slab: remove SLUB_CPU_PARTIAL
slab: remove the do_slab_free() fastpath
slab: remove defer_deactivate_slab()
slab: simplify kmalloc_nolock()
slab: remove struct kmem_cache_cpu
slab: remove unused PREEMPT_RT specific macros
slab: refill sheaves from all nodes
slab: update overview comments
slab: remove frozen slab checks from __slab_free()
mm/slub: remove DEACTIVATE_TO_* stat items
mm/slub: cleanup and repurpose some stat items
include/linux/slab.h | 6 -
mm/Kconfig | 11 -
mm/internal.h | 1 +
mm/page_alloc.c | 5 +
mm/slab.h | 53 +-
mm/slab_common.c | 61 +-
mm/slub.c | 2631 +++++++++++++++++---------------------------------
7 files changed, 972 insertions(+), 1796 deletions(-)
---
base-commit: aa2ab7f1e8dc9d27b9130054e48b0c6accddfcba
change-id: 20251002-sheaves-for-all-86ac13dc47a5
Best regards,
--
Vlastimil Babka <vbabka@...e.cz>