Message-ID: <746087fd-993b-47b3-99e4-9bd4d3502e71@suse.cz>
Date: Wed, 17 Jul 2024 12:49:23 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: David Rientjes <rientjes@...gle.com>, Christoph Lameter <cl@...ux.com>,
 Joonsoo Kim <iamjoonsoo.kim@....com>, Pekka Enberg <penberg@...nel.org>,
 Andrew Morton <akpm@...ux-foundation.org>,
 "linux-mm@...ck.org" <linux-mm@...ck.org>,
 LKML <linux-kernel@...r.kernel.org>,
 Roman Gushchin <roman.gushchin@...ux.dev>,
 Hyeonggon Yoo <42.hyeyoo@...il.com>, Kees Cook <keescook@...omium.org>,
 Chengming Zhou <chengming.zhou@...ux.dev>
Subject: [GIT PULL] slab updates for 6.11

Hi Linus,

please pull the latest slab updates from:

  git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git tags/slab-for-6.11

No merge conflicts with other trees are expected.

Thanks,
Vlastimil

======================================

The most prominent change this time is the kmem_buckets-based hardening of
kmalloc() allocations from Kees Cook. We have also extended the kmalloc()
alignment guarantees for non-power-of-two sizes in a way that benefits Rust.
The rest are various cleanups and non-critical fixups.

- Dedicated bucket allocator (Kees Cook)

  This series [1] enhances the probabilistic defense against heap
  spraying/grooming of CONFIG_RANDOM_KMALLOC_CACHES from last year. kmalloc()
  users that are known to be useful for exploits can get a completely separate
  set of kmalloc caches that can't be shared with other users. The first
  converted users are alloc_msg() and memdup_user(). The hardening is enabled by
  CONFIG_SLAB_BUCKETS.
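  As a rough sketch of how a caller opts in (modeled on the ipc/msgutil.c
  conversion in the series [1]; treat the exact signatures as illustrative,
  not authoritative):

```c
/* Sketch: give one allocation site its own dedicated set of kmalloc
 * caches, isolated from all other kmalloc() users. */
static kmem_buckets *msg_buckets __ro_after_init;

static int __init init_msg_buckets(void)
{
	/* name, slab flags, useroffset, usersize, ctor */
	msg_buckets = kmem_buckets_create("msg_msg", SLAB_ACCOUNT,
					  sizeof(struct msg_msg),
					  DATALEN_MSG, NULL);
	return 0;
}
subsys_initcall(init_msg_buckets);

	/* Allocations then come from the dedicated caches instead of the
	 * shared kmalloc ones: */
	msg = kmem_buckets_alloc(msg_buckets, alloc_size, GFP_KERNEL);
```

  With CONFIG_SLAB_BUCKETS disabled, the calls fall back to plain kmalloc()
  behavior, so converted sites pay no cost on unhardened configurations.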

- Extended kmalloc() alignment guarantees (Vlastimil Babka)

  For years now we have guaranteed natural alignment for power-of-two
  allocations, but nothing was defined for other sizes (in practice, we have
  two such buckets, kmalloc-96 and kmalloc-192). To avoid unnecessary padding
  in the Rust layer due to its alignment rules, extend the guarantee so that
  the alignment is at least the largest power-of-two divisor of the requested
  size. This fits what Rust needs, is a superset of the existing power-of-two
  guarantee, and does not in practice change the layout (and thus does not add
  overhead due to padding) of the kmalloc-96 and kmalloc-192 caches, unless slab
  debugging is enabled for them.

- Cleanups and non-critical fixups (Chengming Zhou, Suren Baghdasaryan, Matthew
  Wilcox, Alex Shi, Vlastimil Babka)

  Various tweaks related to the new alloc profiling code, folio conversion,
  debugging, and remaining leftovers from the SLAB allocator removal.

[1] https://lore.kernel.org/all/20240701190152.it.631-kees@kernel.org/

----------------------------------------------------------------
Alex Shi (Tencent) (1):
      mm/memcg: alignment memcg_data define condition

Chengming Zhou (3):
      slab: make check_object() more consistent
      slab: don't put freepointer outside of object if only orig_size
      slab: delete useless RED_INACTIVE and RED_ACTIVE

Kees Cook (6):
      mm/slab: Introduce kmem_buckets typedef
      mm/slab: Plumb kmem_buckets into __do_kmalloc_node()
      mm/slab: Introduce kvmalloc_buckets_node() that can take kmem_buckets argument
      mm/slab: Introduce kmem_buckets_create() and family
      ipc, msg: Use dedicated slab buckets for alloc_msg()
      mm/util: Use dedicated slab buckets for memdup_user()

Matthew Wilcox (Oracle) (1):
      mm: Reduce the number of slab->folio casts

Suren Baghdasaryan (2):
      mm, slab: move allocation tagging code in the alloc path into a hook
      mm, slab: move prepare_slab_obj_exts_hook under CONFIG_MEM_ALLOC_PROFILING

Vlastimil Babka (3):
      mm, slab: don't wrap internal functions with alloc_hooks()
      slab, rust: extend kmalloc() alignment guarantees to remove Rust padding
      Merge branch 'slab/for-6.11/buckets' into slab/for-next

 Documentation/core-api/memory-allocation.rst |   6 +-
 include/linux/mm.h                           |   6 +-
 include/linux/mm_types.h                     |   9 +-
 include/linux/poison.h                       |   7 +-
 include/linux/slab.h                         |  97 +++++++++----
 ipc/msgutil.c                                |  13 +-
 kernel/configs/hardening.config              |   1 +
 lib/fortify_kunit.c                          |   2 -
 lib/slub_kunit.c                             |   2 +-
 mm/Kconfig                                   |  17 +++
 mm/slab.h                                    |  14 +-
 mm/slab_common.c                             | 111 +++++++++++++-
 mm/slub.c                                    | 209 +++++++++++++++------------
 mm/util.c                                    |  23 ++-
 rust/kernel/alloc/allocator.rs               |  19 +--
 scripts/kernel-doc                           |   1 +
 tools/include/linux/poison.h                 |   7 +-
 17 files changed, 369 insertions(+), 175 deletions(-)
