Message-ID: <20221105200646.wmfilka6prusrb56@box.shutemov.name>
Date: Sat, 5 Nov 2022 23:06:46 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Hugh Dickins <hughd@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
Vlastimil Babka <vbabka@...e.cz>, Peter Xu <peterx@...hat.com>,
Yang Shi <shy828301@...il.com>,
John Hubbard <jhubbard@...dia.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Sidhartha Kumar <sidhartha.kumar@...cle.com>,
Muchun Song <songmuchun@...edance.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
Mina Almasry <almasrymina@...gle.com>,
James Houghton <jthoughton@...gle.com>,
Zach O'Keefe <zokeefe@...gle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 3/3] mm,thp,rmap: lock_compound_mapcounts() on THP mapcounts

On Wed, Nov 02, 2022 at 06:53:45PM -0700, Hugh Dickins wrote:
> Fix the races in maintaining compound_mapcount, subpages_mapcount and
> subpage _mapcount by using PG_locked in the first tail of any compound
> page for a bit_spin_lock() on such modifications; skipping the usual
> atomic operations on those fields in this case.
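
For anyone skimming the thread, the locking scheme reads to me roughly as
below: a simplified sketch, not the patch verbatim. I'm assuming the
struct compound_mapcounts snapshot and the subpages_mapcount_ptr()
accessor from earlier in this series, and ignoring the usual -1 bias of
mapcount fields.

#include <linux/bit_spinlock.h>
#include <linux/mm.h>
#include <linux/page-flags.h>

struct compound_mapcounts {
	unsigned int compound_mapcount;
	unsigned int subpages_mapcount;
};

/* Take PG_locked in the first tail page, then snapshot both counts */
static void lock_compound_mapcounts(struct page *head,
		struct compound_mapcounts *mapcounts)
{
	bit_spin_lock(PG_locked, &head[1].flags);
	mapcounts->compound_mapcount =
		atomic_read(compound_mapcount_ptr(head));
	mapcounts->subpages_mapcount =
		atomic_read(subpages_mapcount_ptr(head));
}

/* Write the (possibly modified) counts back and drop the bit spinlock */
static void unlock_compound_mapcounts(struct page *head,
		struct compound_mapcounts *mapcounts)
{
	atomic_set(compound_mapcount_ptr(head),
		   mapcounts->compound_mapcount);
	atomic_set(subpages_mapcount_ptr(head),
		   mapcounts->subpages_mapcount);
	bit_spin_unlock(PG_locked, &head[1].flags);
}
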
>
> Bring page_remove_file_rmap() and page_remove_anon_compound_rmap()
> back into page_remove_rmap() itself. Rearrange page_add_anon_rmap()
> and page_add_file_rmap() and page_remove_rmap() to follow the same
> "if (compound) {lock} else if (PageCompound) {lock} else {atomic}"
> pattern (with a PageTransHuge in the compound test, like before, to
> avoid BUG_ONs and optimize away that block when THP is not configured).
> Move all the stats updates outside, after the bit_spin_locked section,
> so that it is sure to be a leaf lock.
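
So the add/remove paths end up shaped something like this: a pared-down
sketch of the pattern only, with hypothetical nr bookkeeping; the real
functions also deal with mlock, PageAnonExclusive, double-map accounting
and so on.

void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
		bool compound)
{
	struct compound_mapcounts mapcounts;
	int nr = 0;	/* newly mapped pages, for the stats afterwards */

	if (compound && PageTransHuge(page)) {
		/* pmd-mapping a THP: raise compound_mapcount under the lock */
		lock_compound_mapcounts(page, &mapcounts);
		if (!mapcounts.compound_mapcount++)
			nr = 1;	/* first pmd mapping (subpage detail elided) */
		unlock_compound_mapcounts(page, &mapcounts);
	} else if (PageCompound(page)) {
		/* pte-mapping one subpage of a compound page */
		struct page *head = compound_head(page);

		lock_compound_mapcounts(head, &mapcounts);
		mapcounts.subpages_mapcount++;
		if (atomic_inc_and_test(&page->_mapcount))
			nr = 1;	/* first pte mapping of this subpage */
		unlock_compound_mapcounts(head, &mapcounts);
	} else {
		/* ordinary small page: the plain atomic, as before */
		if (atomic_inc_and_test(&page->_mapcount))
			nr = 1;
	}

	/* stats updated here, outside the locked section: it stays a leaf lock */
	if (nr)
		__mod_lruvec_page_state(page, NR_FILE_MAPPED, nr);
}
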
>
> Add page_dup_compound_rmap() to manage compound locking versus atomics
> in sync with the rest. In particular, hugetlb pages are still using
> the atomics: to avoid unnecessary interference there, and because they
> never have subpage mappings; but this exception can easily be changed.
> Conveniently, page_dup_compound_rmap() turns out to suit an anon THP's
> __split_huge_pmd_locked() too.
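
And the dup side for fork(), as I'm picturing it, splits the same way,
with hugetlb staying on the atomic. Again only a sketch: the pte case's
subpage _mapcount handling is elided, and the compound argument here is
my shorthand for pmd versus pte duplication.

void page_dup_compound_rmap(struct page *head, bool compound)
{
	struct compound_mapcounts mapcounts;

	if (PageHuge(head)) {
		/* hugetlb never has subpage mappings: the atomic suffices */
		atomic_inc(compound_mapcount_ptr(head));
		return;
	}
	/* THP, and __split_huge_pmd_locked(): same bit spinlock as above */
	lock_compound_mapcounts(head, &mapcounts);
	if (compound)
		mapcounts.compound_mapcount++;
	else
		mapcounts.subpages_mapcount++;
	unlock_compound_mapcounts(head, &mapcounts);
}
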
>
> bit_spin_lock() is not popular with PREEMPT_RT folks: but PREEMPT_RT
> sensibly excludes TRANSPARENT_HUGEPAGE already, so its only exposure
> is to the non-hugetlb non-THP pte-mapped compound pages (with large
> folios being currently dependent on TRANSPARENT_HUGEPAGE). There is
> never any scan of subpages in this case; but we have chosen to use
> PageCompound tests rather than PageTransCompound tests to gate the
> use of lock_compound_mapcounts(), so that page_mapped() is correct on
> all compound pages, whether or not TRANSPARENT_HUGEPAGE is enabled:
> could that be a problem for PREEMPT_RT, when there is contention on
> the lock - under heavy concurrent forking for example? If so, then it
> can be turned into a sleeping lock (like folio_lock()) when PREEMPT_RT.
>
> A simple 100 X munmap(mmap(2GB, MAP_SHARED|MAP_POPULATE, tmpfs), 2GB)
> took 18 seconds on small pages, and used to take 1 second on huge pages,
> but now takes 115 milliseconds on huge pages. Mapping by pmds a second
> time used to take 860ms and now takes 86ms; mapping by pmds after mapping
> by ptes (when the scan is needed) used to take 870ms and now takes 495ms.
> Mapping huge pages by ptes is largely unaffected but variable: between 5%
> faster and 5% slower in what I've recorded. Contention on the lock is
> likely to behave worse than contention on the atomics did.
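
Nice numbers. For anyone wanting to reproduce them, I read the test as
essentially the loop below (userspace sketch: I'm assuming /dev/shm is
tmpfs and that it is timed externally with time(1); the huge-page cases
would presumably need a tmpfs mounted with huge=always):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define SIZE	(2UL << 30)	/* 2GB */

int main(void)
{
	/* assuming /dev/shm is tmpfs, as it usually is */
	int fd = open("/dev/shm/maptest", O_RDWR | O_CREAT, 0600);

	if (fd < 0 || ftruncate(fd, SIZE) < 0) {
		perror("setup");
		return 1;
	}
	for (int i = 0; i < 100; i++) {
		void *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
			       MAP_SHARED | MAP_POPULATE, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		munmap(p, SIZE);
	}
	unlink("/dev/shm/maptest");
	return 0;
}
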
>
> Signed-off-by: Hugh Dickins <hughd@...gle.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
--
Kiryl Shutsemau / Kirill A. Shutemov