Message-ID: <a0511a72-711b-4c8f-b9d7-da95681000c1@suse.cz>
Date: Fri, 5 Jan 2024 10:36:08 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: David Rientjes <rientjes@...gle.com>, Joonsoo Kim
 <iamjoonsoo.kim@....com>, Christoph Lameter <cl@...ux.com>,
 Pekka Enberg <penberg@...nel.org>, Andrew Morton
 <akpm@...ux-foundation.org>, "linux-mm@...ck.org" <linux-mm@...ck.org>,
 LKML <linux-kernel@...r.kernel.org>, patches@...ts.linux.dev,
 Roman Gushchin <roman.gushchin@...ux.dev>,
 Hyeonggon Yoo <42.hyeyoo@...il.com>,
 Chengming Zhou <chengming.zhou@...ux.dev>,
 Stephen Rothwell <sfr@...b.auug.org.au>
Subject: [GIT PULL] slab updates for 6.8

Hi Linus,

once the merge window opens, please pull the latest slab updates from:

  git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git tags/slab-for-6.8

there are more conflicts (with the mm tree) than usual this time, but I believe
it's a one-off situation caused by a bunch of code being deleted or shuffled
around as part of the SLAB removal.

Stephen's -next resolutions (merging slab-next after mm):
https://lore.kernel.org/all/20240102150224.3c091932@canb.auug.org.au/
https://lore.kernel.org/all/20240102151332.48a87d86@canb.auug.org.au/
https://lore.kernel.org/all/20240102153438.5b29f8c5@canb.auug.org.au/

Only the last one is more involved, as changes to __kmalloc_large_node() and
free_large_kmalloc() in mm/slab_common.c from the mm tree need to be replicated
in mm/slub.c.

I have tried the opposite direction (mm after slab) and it was basically the
same. Parking this slab PR until after the mm PR is certainly an option.

Thanks,
Vlastimil

======================================

- SLUB: delayed freezing of CPU partial slabs (Chengming Zhou)

  Freezing is an operation involving cmpxchg_double() that makes a slab
  exclusive to a particular CPU. Chengming noticed that we also use it in
  situations where we are not yet installing the slab as the CPU slab, because
  freezing also indicates that the slab is not on the shared list. This
  results in redundant freeze/unfreeze operations, which can be avoided by
  marking the shared list presence separately, reusing the PG_workingset flag.

  This approach neatly avoids the issues described in 9b1ea29bc0d7 ("Revert
  "mm, slub: consider rest of partial list if acquire_slab() fails"") as we can
  now grab a slab from the shared list in a quick and guaranteed way without
  the cmpxchg_double() operation that amplifies the lock contention and can fail.

  As a result, lkp has reported a 34.2% improvement in stress-ng.rawudp.ops_per_sec.

- SLAB removal and SLUB cleanups (Vlastimil Babka)

  The SLAB allocator has been deprecated since 6.5 and nobody has objected so
  far. We agreed at LSF/MM to wait until the next LTS, which is 6.6, so we
  should be good to go now.

  This doesn't yet erase all traces of SLAB outside of mm/ so some dead code,
  comments or documentation remain, and will be cleaned up gradually (some
  series are already in the works).

  Removing the choice of allocators has already allowed us to simplify and
  optimize the code wiring up the kmalloc APIs to the SLUB implementation.

----------------------------------------------------------------
Chengming Zhou (9):
      slub: Reflow ___slab_alloc()
      slub: Change get_partial() interfaces to return slab
      slub: Keep track of whether slub is on the per-node partial list
      slub: Prepare __slab_free() for unfrozen partial slab out of node partial list
      slub: Introduce freeze_slab()
      slub: Delay freezing of partial slabs
      slub: Optimize deactivate_slab()
      slub: Rename all *unfreeze_partials* functions to *put_partials*
      slub: Update frozen slabs documentations in the source

Vlastimil Babka (26):
      mm/slab, docs: switch mm-api docs generation from slab.c to slub.c
      mm/slab: remove CONFIG_SLAB from all Kconfig and Makefile
      KASAN: remove code paths guarded by CONFIG_SLAB
      KFENCE: cleanup kfence_guarded_alloc() after CONFIG_SLAB removal
      mm/memcontrol: remove CONFIG_SLAB #ifdef guards
      cpu/hotplug: remove CPUHP_SLAB_PREPARE hooks
      mm/slab: remove CONFIG_SLAB code from slab common code
      mm/mempool/dmapool: remove CONFIG_DEBUG_SLAB ifdefs
      mm/slab: remove mm/slab.c and slab_def.h
      mm/slab: move struct kmem_cache_cpu declaration to slub.c
      mm/slab: move the rest of slub_def.h to mm/slab.h
      mm/slab: consolidate includes in the internal mm/slab.h
      mm/slab: move pre/post-alloc hooks from slab.h to slub.c
      mm/slab: move memcg related functions from slab.h to slub.c
      mm/slab: move struct kmem_cache_node from slab.h to slub.c
      mm/slab: move kfree() from slab_common.c to slub.c
      mm/slab: move kmalloc_slab() to mm/slab.h
      mm/slab: move kmalloc() functions from slab_common.c to slub.c
      mm/slub: remove slab_alloc() and __kmem_cache_alloc_lru() wrappers
      mm/slub: optimize alloc fastpath code layout
      mm/slub: optimize free fast path code layout
      mm/slub: fix bulk alloc and free stats
      mm/slub: introduce __kmem_cache_free_bulk() without free hooks
      mm/slub: handle bulk and single object freeing separately
      mm/slub: free KFENCE objects in slab_free_hook()
      Merge branch 'slab/for-6.8/slub-hook-cleanups' into slab/for-next

 CREDITS                           |   12 +-
 Documentation/core-api/mm-api.rst |    2 +-
 arch/arm64/Kconfig                |    2 +-
 arch/s390/Kconfig                 |    2 +-
 arch/x86/Kconfig                  |    2 +-
 include/linux/cpuhotplug.h        |    1 -
 include/linux/slab.h              |   22 +-
 include/linux/slab_def.h          |  124 --
 include/linux/slub_def.h          |  204 --
 kernel/cpu.c                      |    5 -
 lib/Kconfig.debug                 |    1 -
 lib/Kconfig.kasan                 |   11 +-
 lib/Kconfig.kfence                |    2 +-
 lib/Kconfig.kmsan                 |    2 +-
 mm/Kconfig                        |   68 +-
 mm/Kconfig.debug                  |   16 +-
 mm/Makefile                       |    6 +-
 mm/dmapool.c                      |    2 +-
 mm/kasan/common.c                 |   13 +-
 mm/kasan/kasan.h                  |    3 +-
 mm/kasan/quarantine.c             |    7 -
 mm/kasan/report.c                 |    1 +
 mm/kfence/core.c                  |    4 -
 mm/memcontrol.c                   |    6 +-
 mm/mempool.c                      |    6 +-
 mm/slab.c                         | 4026 -------------------------------------
 mm/slab.h                         |  551 ++---
 mm/slab_common.c                  |  231 +--
 mm/slub.c                         | 1137 ++++++++---
 29 files changed, 1094 insertions(+), 5375 deletions(-)
 delete mode 100644 include/linux/slab_def.h
 delete mode 100644 include/linux/slub_def.h
 delete mode 100644 mm/slab.c
