Message-ID: <2f7985a8-0460-42de-9af0-4f966b937695@suse.cz>
Date: Thu, 20 Mar 2025 14:06:42 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: David Rientjes <rientjes@...gle.com>, Christoph Lameter <cl@...ux.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
"Uladzislau Rezki (Sony)" <urezki@...il.com>, RCU <rcu@...r.kernel.org>
Subject: [GIT PULL] slab updates for 6.15
Hi Linus,
please pull the latest slab updates from:
git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git tags/slab-for-6.15
There's a small conflict with the rcu tree:
https://lore.kernel.org/all/20250212150941.5e4fa1c9@canb.auug.org.au/
Thanks,
Vlastimil
======================================
* Move the TINY_RCU kvfree_rcu() implementation from RCU to the SLAB
subsystem and clean up its integration (Vlastimil Babka)
Following the move of the TREE_RCU batching kvfree_rcu() implementation in
6.14, also move the simpler TINY_RCU variant. Refactor the #ifdef guards
so that the simple implementation is also used with SLUB_TINY. Remove the
need for RCU to recognize fake callback function pointers
(__is_kvfree_rcu_offset()) when handling call_rcu(), by implementing a
callback that calculates the object's address from the embedded rcu_head
address without knowing its offset (see the first sketch after this list).
* Improve kmalloc cache randomization in kvmalloc (GONG Ruiqi)
Due to an extra layer of function call, all kvmalloc() allocations used the
same set of random caches. Moving the kvmalloc() implementation to slub.c
fixes this, so randomization now works for kvmalloc() as well (see the
second sketch after this list).
* Various improvements to debugging, testing and other cleanups (Hyesoo Yu,
Lilith Gkini, Uladzislau Rezki, Matthew Wilcox, Kevin Brodsky, Ye Bin)
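As an illustration of the new callback, here is a simplified sketch of how
the object's start can be recovered from the rcu_head address alone. This is
not the exact upstream code (debug handling, redzones and accounting are
omitted), just the shape of the approach:

    /*
     * Simplified sketch of a kvfree_rcu() callback: recover the
     * object's start address from the embedded rcu_head pointer,
     * without a stored offset.
     */
    static void kvfree_rcu_cb(struct rcu_head *head)
    {
        void *ptr = (void *)head;
        struct folio *folio;
        struct slab *slab;
        struct kmem_cache *s;
        void *slab_base;

        if (is_vmalloc_addr(ptr)) {
            /*
             * vmalloc objects are page-aligned and the rcu_head
             * offset is known to be smaller than a page, so
             * rounding down to the page recovers the object.
             */
            vfree((void *)PAGE_ALIGN_DOWN((unsigned long)ptr));
            return;
        }

        folio = virt_to_folio(ptr);
        if (!folio_test_slab(folio)) {
            /* large kmalloc: the object starts at the folio */
            kfree(folio_address(folio));
            return;
        }

        /* round the rcu_head address down to the object's start */
        slab = folio_slab(folio);
        s = slab->slab_cache;
        slab_base = folio_address(folio);
        ptr = slab_base + rounddown(ptr - slab_base, s->size);
        kfree(ptr);
    }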
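And a schematic of the randomization issue: with RANDOM_KMALLOC_CACHES, the
kmalloc cache copy is picked by hashing the call-site address, so a fixed
intermediate call site collapses all callers onto one copy. The identifiers
below (random_seed, NR_COPIES, pick_random_cache) are simplified stand-ins,
not the kernel's exact names:

    /* Schematic cache selection: hash the caller with a boot seed. */
    static inline unsigned int pick_random_cache(unsigned long caller)
    {
        return hash_64(caller ^ random_seed, ilog2(NR_COPIES));
    }

    /*
     * Before: kvmalloc() lived in mm/util.c and issued the kmalloc
     * call itself, so "caller" (_RET_IP_) was always kvmalloc()'s
     * own call site and every kvmalloc() user hashed to the same
     * cache copy. With the implementation moved into slub.c, the
     * original caller's _RET_IP_ reaches the hash, so distinct call
     * sites get distinct caches again.
     */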
----------------------------------------------------------------
GONG Ruiqi (2):
slab: Adjust placement of __kvmalloc_node_noprof
slab: Achieve better kmalloc caches randomization in kvmalloc
Hyesoo Yu (2):
mm: slub: Print the broken data before restoring them
mm: slub: call WARN() when detecting a slab corruption
Kevin Brodsky (1):
mm/slab: simplify SLAB_* flag handling
Lilith Gkini (1):
slub: Handle freelist cycle in on_freelist()
Matthew Wilcox (Oracle) (1):
slab: Mark large folios for debugging purposes
Uladzislau Rezki (Sony) (1):
kunit, slub: Add test_kfree_rcu_wq_destroy use case
Vlastimil Babka (6):
slab, rcu: move TINY_RCU variant of kvfree_rcu() to SLAB
rcu: remove trace_rcu_kvfree_callback
rcu, slab: use a regular callback function for kvfree_rcu
slab: don't batch kvfree_rcu() with SLUB_TINY
mm, slab: cleanup slab_bug() parameters
Merge branch 'slab/for-6.15/kfree_rcu_tiny' into slab/for-next
Ye Bin (1):
mm/slab: call kmalloc_noprof() unconditionally in kmalloc_array_noprof()
include/linux/page-flags.h | 18 +--
include/linux/rcupdate.h | 33 +++--
include/linux/rcutiny.h | 36 -----
include/linux/rcutree.h | 3 -
include/linux/slab.h | 16 ++-
include/trace/events/rcu.h | 34 -----
kernel/rcu/tiny.c | 25 ----
kernel/rcu/tree.c | 9 +-
lib/slub_kunit.c | 59 ++++++++
mm/Kconfig | 4 +
mm/slab.h | 34 +----
mm/slab_common.c | 44 ++++--
mm/slub.c | 336 ++++++++++++++++++++++++++++++++++++++-------
mm/util.c | 162 ----------------------
14 files changed, 430 insertions(+), 383 deletions(-)