Message-Id: <20220126183429.1840447-1-pasha.tatashin@soleen.com>
Date: Wed, 26 Jan 2022 18:34:20 +0000
From: Pasha Tatashin <pasha.tatashin@...een.com>
To: pasha.tatashin@...een.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-m68k@...ts.linux-m68k.org,
anshuman.khandual@....com, willy@...radead.org,
akpm@...ux-foundation.org, william.kucharski@...cle.com,
mike.kravetz@...cle.com, vbabka@...e.cz, geert@...ux-m68k.org,
schmitzmic@...il.com, rostedt@...dmis.org, mingo@...hat.com,
hannes@...xchg.org, guro@...com, songmuchun@...edance.com,
weixugc@...gle.com, gthelen@...gle.com, rientjes@...gle.com,
pjt@...gle.com, hughd@...gle.com
Subject: [PATCH v3 0/9] Hardening page _refcount
Changelog:
v3:
- Sync with the latest linux-next
v2:
- As suggested by Matthew Wilcox, removed the "mm: page_ref_add_unless()
  does not trace 'u' argument" patch, as page_ref_add_unless() is going
  away.
v1:
- Sync with the latest linux-next
RFCv2:
- Use the "fetch" variant of the atomic instructions instead of the
  "return" variant
- Allow negative values, since all 32 bits of _refcount are used
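For clarity, the difference between the two flavors (a minimal
illustration only, not code from the series; both helpers are the
standard atomic_t ones):

	/* fetch variant: returns the value _refcount held before the add */
	old = atomic_fetch_add(nr, &page->_refcount);

	/* return variant: returns the value _refcount holds after the add */
	new = atomic_add_return(nr, &page->_refcount);

The fetch form gives both the old value and (old + nr), so the
old -> new transition can be checked for wraparound with a single
unsigned comparison, which keeps working even when _refcount
legitimately holds a negative value.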
It is hard to root-cause _refcount problems because they usually
manifest only after the damage has occurred, yet they can lead to
catastrophic failures such as memory corruption. A number of
refcount-related issues have been discovered recently [1], [2], [3].
Improve debuggability by adding more checks that ensure that
page->_refcount never turns negative (i.e. that a double free or a
free after freeze does not happen):
- Check for overflow and underflow directly in the functions that
  modify _refcount (a rough sketch of this check follows the list)
- Remove set_page_count(), so we do not unconditionally overwrite
  _refcount with an unrestrained value
- Trace return values in all functions that modify _refcount
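To give an idea of what these checks look like, here is a rough sketch
for the add path (shaped after the existing page_ref_add() and
VM_BUG_ON_PAGE() helpers in include/linux/page_ref.h; the exact checks
and the tracepoint handling in the patches may differ):

	static inline void page_ref_add(struct page *page, int nr)
	{
		int old_val = atomic_fetch_add(nr, &page->_refcount);
		int new_val = old_val + nr;

		/*
		 * An unsigned comparison of the old -> new transition
		 * catches wraparound in either direction and still works
		 * when _refcount legitimately holds a negative value.
		 */
		VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val,
			       page);
	}

The sub/dec paths would check the opposite direction of the transition
(new_val > old_val), and the computed values are what gets fed into the
page_ref tracepoints.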
Applies against next-20220125.
Previous versions:
v2: https://lore.kernel.org/all/20211221150140.988298-1-pasha.tatashin@soleen.com
v1: https://lore.kernel.org/all/20211208203544.2297121-1-pasha.tatashin@soleen.com
RFCv2: https://lore.kernel.org/all/20211117012059.141450-1-pasha.tatashin@soleen.com
RFCv1: https://lore.kernel.org/all/20211026173822.502506-1-pasha.tatashin@soleen.com
[1] https://lore.kernel.org/all/xr9335nxwc5y.fsf@gthelen2.svl.corp.google.com
[2] https://lore.kernel.org/all/1582661774-30925-2-git-send-email-akaher@vmware.com
[3] https://lore.kernel.org/all/20210622021423.154662-3-mike.kravetz@oracle.com
Pasha Tatashin (9):
mm: add overflow and underflow checks for page->_refcount
mm: Avoid using set_page_count() in set_page_recounted()
mm: remove set_page_count() from page_frag_alloc_align
mm: avoid using set_page_count() when pages are freed into allocator
mm: rename init_page_count() -> page_ref_init()
mm: remove set_page_count()
mm: simplify page_ref_* functions
mm: do not use atomic_set_release in page_ref_unfreeze()
mm: use atomic_cmpxchg_acquire in page_ref_freeze().
 arch/m68k/mm/motorola.c         |   2 +-
 include/linux/mm.h              |   2 +-
 include/linux/page_ref.h        | 149 +++++++++++++++-----------------
 include/trace/events/page_ref.h |  58 ++++++++-----
 mm/debug_page_ref.c             |  22 +----
 mm/internal.h                   |   6 +-
 mm/page_alloc.c                 |  19 ++--
 7 files changed, 132 insertions(+), 126 deletions(-)
--
2.35.0.rc0.227.g00780c9af4-goog