Message-ID: <20241226170710.1159679-1-surenb@google.com>
Date: Thu, 26 Dec 2024 09:06:52 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: akpm@...ux-foundation.org
Cc: peterz@...radead.org, willy@...radead.org, liam.howlett@...cle.com,
lorenzo.stoakes@...cle.com, mhocko@...e.com, vbabka@...e.cz,
hannes@...xchg.org, mjguzik@...il.com, oliver.sang@...el.com,
mgorman@...hsingularity.net, david@...hat.com, peterx@...hat.com,
oleg@...hat.com, dave@...olabs.net, paulmck@...nel.org, brauner@...nel.org,
dhowells@...hat.com, hdanton@...a.com, hughd@...gle.com,
lokeshgidra@...gle.com, minchan@...gle.com, jannh@...gle.com,
shakeel.butt@...ux.dev, souravpanda@...gle.com, pasha.tatashin@...een.com,
klarasmodin@...il.com, corbet@....net, linux-doc@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, kernel-team@...roid.com,
surenb@...gle.com
Subject: [PATCH v7 00/17] move per-vma lock into vm_area_struct

Back when per-vma locks were introduced, vm_lock was moved out of
vm_area_struct in [1] because of a performance regression caused by
false cacheline sharing. Recent investigation [2] revealed that the
regression is limited to the rather old Broadwell microarchitecture,
and even there it can be mitigated by disabling adjacent cacheline
prefetching; see [3].

Splitting a single logical structure into multiple ones leads to more
complicated management, extra pointer dereferences, and overall less
maintainable code. When the split-away part is a lock, it complicates
things even further. With no performance benefit, there is no reason
to keep the split. Merging vm_lock back into vm_area_struct also
allows vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this
patchset.
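
To illustrate, below is a minimal, hypothetical sketch of the layout
change (heavily simplified; field and type names other than
____cacheline_aligned_in_smp are placeholders, not the upstream
definition):

#include <linux/cache.h>
#include <linux/rwsem.h>

/*
 * Sketch only. Before the merge, vm_area_struct carried a pointer to a
 * separately allocated lock, costing an extra dereference on every lock
 * operation. After the merge the lock is embedded and starts on its own
 * cacheline, so read-mostly fields do not false-share with it. The slab
 * cache is likewise created cacheline-aligned (SLAB_HWCACHE_ALIGN).
 */
struct vma_lock_sketch {
	struct rw_semaphore lock;
};

struct vm_area_struct_sketch {
	unsigned long vm_start;		/* read-mostly fields */
	unsigned long vm_end;
	/* ... */
	struct vma_lock_sketch vm_lock ____cacheline_aligned_in_smp;
};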

This patchset:
1. moves vm_lock back into vm_area_struct, aligning it at the cacheline
boundary and changing the cache to be cacheline-aligned to minimize
cacheline sharing;
2. changes vm_area_struct initialization to mark a new vma as detached
until it is inserted into the vma tree;
3. replaces vm_lock and the vma->detached flag with a reference counter
(a sketch of the resulting counter encoding follows this list);
4. changes the vm_area_struct cache to SLAB_TYPESAFE_BY_RCU to allow
reuse of vma objects and to minimize call_rcu() calls.
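
A rough sketch of the resulting counter encoding, assuming the
VMA_LOCK_OFFSET/VMA_REF_LIMIT scheme discussed during review (the
constants and the helper below are illustrative, not necessarily the
exact upstream ones):

#include <linux/refcount.h>

/*
 * Illustrative vm_refcnt encoding (a sketch, not the precise upstream
 * implementation):
 *   0                   vma is detached
 *   1                   attached, not read-locked
 *   2..VMA_REF_LIMIT    attached, read-locked by (count - 1) readers
 *   >= VMA_LOCK_OFFSET  write-locked; new readers must back off
 */
#define VMA_LOCK_OFFSET	0x40000000
#define VMA_REF_LIMIT	(VMA_LOCK_OFFSET - 1)

static inline bool vma_sketch_is_attached(refcount_t *vm_refcnt)
{
	return refcount_read(vm_refcnt) != 0;
}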

Pagefault microbenchmarks show a performance improvement (first column
is the baseline, second is patched):
Hmean faults/cpu-1 507926.5547 ( 0.00%) 506519.3692 * -0.28%*
Hmean faults/cpu-4 479119.7051 ( 0.00%) 481333.6802 * 0.46%*
Hmean faults/cpu-7 452880.2961 ( 0.00%) 455845.6211 * 0.65%*
Hmean faults/cpu-12 347639.1021 ( 0.00%) 352004.2254 * 1.26%*
Hmean faults/cpu-21 200061.2238 ( 0.00%) 229597.0317 * 14.76%*
Hmean faults/cpu-30 145251.2001 ( 0.00%) 164202.5067 * 13.05%*
Hmean faults/cpu-48 106848.4434 ( 0.00%) 120641.5504 * 12.91%*
Hmean faults/cpu-56 92472.3835 ( 0.00%) 103464.7916 * 11.89%*
Hmean faults/sec-1 507566.1468 ( 0.00%) 506139.0811 * -0.28%*
Hmean faults/sec-4 1880478.2402 ( 0.00%) 1886795.6329 * 0.34%*
Hmean faults/sec-7 3106394.3438 ( 0.00%) 3140550.7485 * 1.10%*
Hmean faults/sec-12 4061358.4795 ( 0.00%) 4112477.0206 * 1.26%*
Hmean faults/sec-21 3988619.1169 ( 0.00%) 4577747.1436 * 14.77%*
Hmean faults/sec-30 3909839.5449 ( 0.00%) 4311052.2787 * 10.26%*
Hmean faults/sec-48 4761108.4691 ( 0.00%) 5283790.5026 * 10.98%*
Hmean faults/sec-56 4885561.4590 ( 0.00%) 5415839.4045 * 10.85%*

Changes since v6 [4]:
- Fixed the vma_start_read_locked() condition in uffd_move_lock(),
  per Lokesh Gidra
- Moved more conditions into unlikely() in vma_start_read(),
  per Peter
- Renamed VMA_LOCK_LOCKED to VMA_LOCK_OFFSET, removed
  VMA_STATE_{A|DE}TACHED, introduced VMA_REF_LIMIT, per Peter
- Made sure no re-attach or re-detach operation can happen and added
  assertions to catch such cases, per Peter
- Added a parameter to vma_iter_store{_gfp} to indicate whether a new
  vma is being added or an existing one modified, to avoid re-attaching
  an existing vma
- Refactored the patches to implement the detached guarantees in a
  single patch
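
For reference, a read-lock fast path on such a counter might look
roughly like the sketch below. The helper is hypothetical, written in
the spirit of the __refcount_{add|inc}_not_zero_limited primitive this
series introduces; the real signature and ordering semantics may
differ:

#include <linux/atomic.h>
#include <linux/refcount.h>

/*
 * Hypothetical sketch: increment the counter unless it is zero (vma is
 * detached) or already above 'limit' (e.g. a writer pushed it past
 * VMA_LOCK_OFFSET), in which case the reader fails and falls back to
 * taking mmap_lock.
 */
static inline bool sketch_inc_not_zero_limited(refcount_t *r, int limit)
{
	int old = refcount_read(r);

	do {
		if (!old || old > limit)
			return false;
	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + 1));

	return true;
}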

What I did not include in this patchset:
- Changes to vma locking patterns;
- Changing do_vmi_align_munmap() to avoid reattach_vmas()
These cleanups need more discussion and can be done independently, as
this patchset is already quite large.

Patchset applies over mm-unstable.

[1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
[2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
[3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/
[4] https://lore.kernel.org/all/20241216192419.2970941-1-surenb@google.com/

Suren Baghdasaryan (17):
mm: introduce vma_start_read_locked{_nested} helpers
mm: move per-vma lock into vm_area_struct
mm: mark vma as detached until it's added into vma tree
mm: modify vma_iter_store{_gfp} to indicate if it's storing a new vma
mm: mark vmas detached upon exit
mm/nommu: fix the last places where vma is not locked before being
attached
types: move struct rcuwait into types.h
mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail
mm: move mmap_init_lock() out of the header file
mm: uninline the main body of vma_start_write()
refcount: introduce __refcount_{add|inc}_not_zero_limited
mm: replace vm_lock and detached flag with a reference count
mm/debug: print vm_refcnt state when dumping the vma
mm: remove extra vma_numab_state_init() call
mm: prepare lock_vma_under_rcu() for vma reuse possibility
mm: make vma cache SLAB_TYPESAFE_BY_RCU
docs/mm: document latest changes to vm_lock

Documentation/mm/process_addrs.rst | 44 +++++----
include/linux/mm.h | 152 ++++++++++++++++++++++-------
include/linux/mm_types.h | 36 ++++---
include/linux/mmap_lock.h | 6 --
include/linux/rcuwait.h | 13 +--
include/linux/refcount.h | 20 +++-
include/linux/slab.h | 6 --
include/linux/types.h | 12 +++
kernel/fork.c | 87 +++++------------
mm/debug.c | 4 +-
mm/init-mm.c | 1 +
mm/memory.c | 85 +++++++++++++---
mm/mmap.c | 3 +-
mm/nommu.c | 6 +-
mm/userfaultfd.c | 31 +++---
mm/vma.c | 31 +++---
mm/vma.h | 13 ++-
tools/testing/vma/linux/atomic.h | 5 +
tools/testing/vma/vma_internal.h | 93 ++++++++----------
19 files changed, 385 insertions(+), 263 deletions(-)

base-commit: 431614f1580a03c1a653340c55ea76bd12a9403f
--
2.47.1.613.gc27f4b7a9f-goog