Message-ID: <20240529111904.2069608-1-david@redhat.com>
Date: Wed, 29 May 2024 13:18:58 +0200
From: David Hildenbrand <david@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org,
	David Hildenbrand <david@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"Matthew Wilcox (Oracle)" <willy@...radead.org>,
	Mike Rapoport <rppt@...nel.org>,
	Minchan Kim <minchan@...nel.org>,
	Sergey Senozhatsky <senozhatsky@...omium.org>,
	Hyeonggon Yoo <42.hyeyoo@...il.com>
Subject: [PATCH v2 0/6] mm: page_type, zsmalloc and page_mapcount_reset()

Wanting to remove the remaining abuser of _mapcount/page_type along with
page_mapcount_reset(), I stumbled over zsmalloc, which is yet to be
converted away from "struct page" [1].

Unfortunately, we cannot completely stop using the page_type field in
zsmalloc code for its own purposes. All other fields in "struct page"
are already used one way or another. Could we simply store a 2-byte
offset value at the beginning of each page? Likely, but that would
require a bit more work; and once we have memdesc we might want to move
the offset in there (struct zsalloc?) again.

... but we can limit the abuse to 16 bits, glue it to a page type that
must be set, and document it. page_has_type() will always successfully
indicate such zsmalloc pages, and such zsmalloc pages only.

We lose zsmalloc support for PAGE_SIZE > 64KB, which should be tolerable.
We could use more bits from the page type, but 16 bits sounds like a
good idea for now.

So clarify the _mapcount/page_type documentation, use a proper page_type
for zsmalloc, and remove page_mapcount_reset().

Briefly tested with zram on x86-64.

[1] https://lore.kernel.org/all/20231130101242.2590384-1-42.hyeyoo@gmail.com/

Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@...radead.org>
Cc: Mike Rapoport <rppt@...nel.org>
Cc: Minchan Kim <minchan@...nel.org>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>
Cc: Hyeonggon Yoo <42.hyeyoo@...il.com>

v1 -> v2:
 * Rebased to mm/mm-unstable
 * "mm: update _mapcount and page_type documentation"
  -> Minor comment change
 * "mm: allow reuse of the lower 16 bit of the page type with an actual
    type"
  -> Fixup 18 vs 16 in description
  -> Reduce PAGE_TYPE_BASE to a single bit and hand-out bits from highest
     to lowest
  -> Adjust description

RFC -> v1:
 * Rebased to v6.10-rc1
 * "mm: update _mapcount and page_type documentation"
  -> Exchange members and fixup doc as suggested by Mike
 * "mm: allow reuse of the lower 16bit of the page type with an actual
    type"
  -> Remove "highest bit" comment, fixup PG_buddy, extend description
 * "mm/zsmalloc: use a proper page type"
  -> Add and use HAVE_ZSMALLOC to fixup compilation
  -> Fixup BUILD_BUG_ON
  -> Add some VM_WARN_ON_ONCE(!PageZsmalloc(page));
 * "mm/mm_init: initialize page->_mapcount directly
    in __init_single_page()"
  -> Fixup patch subject

David Hildenbrand (6):
  mm: update _mapcount and page_type documentation
  mm: allow reuse of the lower 16 bit of the page type with an actual
    type
  mm/zsmalloc: use a proper page type
  mm/page_alloc: clear PageBuddy using __ClearPageBuddy() for bad pages
  mm/filemap: reinitialize folio->_mapcount directly
  mm/mm_init: initialize page->_mapcount directly in
    __init_single_page()

 drivers/block/zram/Kconfig |  1 +
 include/linux/mm.h         | 10 ----------
 include/linux/mm_types.h   | 33 ++++++++++++++++++++++-----------
 include/linux/page-flags.h | 25 ++++++++++++++++---------
 mm/Kconfig                 | 10 ++++++++--
 mm/filemap.c               |  2 +-
 mm/mm_init.c               |  2 +-
 mm/page_alloc.c            |  6 ++++--
 mm/zsmalloc.c              | 29 +++++++++++++++++++++++++----
 9 files changed, 78 insertions(+), 40 deletions(-)

-- 
2.45.1

