Date:   Fri, 7 Jan 2022 12:08:15 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Matthew Wilcox <willy@...radead.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>, patches@...ts.linux.dev,
        Joerg Roedel <joro@...tes.org>
Subject: [GIT PULL] slab for 5.17

Linus,

please pull the latest slab changes from

  git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git tags/slab-for-5.17

======================================

- Separate struct slab from struct page - an offshoot of the page
  folio work. The struct page fields used by the slab allocators are
  moved from struct page to a new struct slab, which uses the same
  physical storage. Like struct folio, it always refers to a head
  page. This brings better type safety, separation of large kmalloc
  allocations from true slabs, and a cleanup of the related objcg
  code.

- A SLAB_MERGE_DEFAULT config optimization.

======================================

The struct slab series was originally based on v5.16-rc3 and has been in
linux-next since the beginning of December [1]. I decided to rebase onto
v5.16-rc6 after Stephen reported a conflict with an mm change that made it to
mainline [2].

If possible I would also like to finish the removal of the slab fields from
struct page within this merge window. That part now depends on commits in the
iommu tree that remove iommu's usage of the 'freelist' field. I currently have
a branch for-5.17/struct-slab-part2 [3] that merges the iommu 'core' branch
after the main struct slab series and adds one commit on top. Would it be OK
to send a pull request with that structure later (once both this pull request
and the iommu tree have been merged), or should it rather be rebased on the
commit in your tree that merges the two?

Thanks,
Vlastimil

[1] https://lore.kernel.org/all/20211203073949.3d081406@canb.auug.org.au/
[2] https://lore.kernel.org/all/3ec33e65-1080-96be-f8bb-0012e3b87387@suse.cz/
[3] https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git/log/?h=for-5.17/struct-slab-part2

----------------------------------------------------------------
Hyeonggon Yoo (2):
      mm: Make SLAB_MERGE_DEFAULT depend on SL[AU]B
      mm/slob: Remove unnecessary page_mapcount_reset() function call

Matthew Wilcox (Oracle) (14):
      mm: Split slab into its own type
      mm: Convert [un]account_slab_page() to struct slab
      mm: Convert virt_to_cache() to use struct slab
      mm: Convert __ksize() to struct slab
      mm: Use struct slab in kmem_obj_info()
      mm: Convert check_heap_object() to use struct slab
      mm/slub: Convert detached_freelist to use a struct slab
      mm/slub: Convert kfree() to use a struct slab
      mm/slub: Convert print_page_info() to print_slab_info()
      mm/slub: Convert pfmemalloc_match() to take a struct slab
      mm/slob: Convert SLOB to use struct slab and struct folio
      mm/kasan: Convert to struct folio and struct slab
      zsmalloc: Stop using slab fields in struct page
      bootmem: Use page->index instead of page->freelist

Vlastimil Babka (18):
      mm: add virt_to_folio() and folio_address()
      mm/slab: Dissolve slab_map_pages() in its caller
      mm/slub: Make object_err() static
      mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
      mm/slub: Convert alloc_slab_page() to return a struct slab
      mm/slub: Convert __free_slab() to use struct slab
      mm/slub: Convert most struct page to struct slab by spatch
      mm/slub: Finish struct page to struct slab conversion
      mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
      mm/slab: Convert most struct page to struct slab by spatch
      mm/slab: Finish struct page to struct slab conversion
      mm: Convert struct page to struct slab in functions used by other subsystems
      mm/memcg: Convert slab objcgs from struct page to struct slab
      mm/kfence: Convert kfence_guarded_alloc() to struct slab
      mm/sl*b: Differentiate struct slab fields by sl*b implementations
      mm/slub: Simplify struct slab slabs field definition
      mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
      Merge branch 'for-5.17/struct-slab' into for-linus

 arch/x86/mm/init_64.c        |    2 +-
 include/linux/bootmem_info.h |    2 +-
 include/linux/kasan.h        |    9 +-
 include/linux/memcontrol.h   |   48 --
 include/linux/mm.h           |   12 +
 include/linux/mm_types.h     |   10 +-
 include/linux/slab.h         |    8 -
 include/linux/slab_def.h     |   16 +-
 include/linux/slub_def.h     |   29 +-
 init/Kconfig                 |    1 +
 mm/bootmem_info.c            |    7 +-
 mm/kasan/common.c            |   27 +-
 mm/kasan/generic.c           |    8 +-
 mm/kasan/kasan.h             |    1 +
 mm/kasan/quarantine.c        |    2 +-
 mm/kasan/report.c            |   13 +-
 mm/kasan/report_tags.c       |   10 +-
 mm/kfence/core.c             |   17 +-
 mm/kfence/kfence_test.c      |    6 +-
 mm/memcontrol.c              |   55 +-
 mm/slab.c                    |  456 ++++++++--------
 mm/slab.h                    |  302 +++++++++--
 mm/slab_common.c             |   14 +-
 mm/slob.c                    |   62 +--
 mm/slub.c                    | 1177 +++++++++++++++++++++---------------------
 mm/sparse.c                  |    2 +-
 mm/usercopy.c                |   13 +-
 mm/zsmalloc.c                |   18 +-
 28 files changed, 1265 insertions(+), 1062 deletions(-)
