Message-ID: <20230609005158.2421285-1-surenb@google.com>
Date: Thu, 8 Jun 2023 17:51:52 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: akpm@...ux-foundation.org
Cc: willy@...radead.org, hannes@...xchg.org, mhocko@...e.com,
josef@...icpanda.com, jack@...e.cz, ldufour@...ux.ibm.com,
laurent.dufour@...ibm.com, michel@...pinasse.org,
liam.howlett@...cle.com, jglisse@...gle.com, vbabka@...e.cz,
minchan@...gle.com, dave@...olabs.net, punit.agrawal@...edance.com,
lstoakes@...il.com, hdanton@...a.com, apopple@...dia.com,
peterx@...hat.com, ying.huang@...el.com, david@...hat.com,
yuzhao@...gle.com, dhowells@...hat.com, hughd@...gle.com,
viro@...iv.linux.org.uk, brauner@...nel.org,
pasha.tatashin@...een.com, surenb@...gle.com, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...roid.com
Subject: [PATCH v2 0/6] Per-vma lock support for swap and userfaults

When per-vma locks were introduced in [1], several types of page faults
would still fall back to mmap_lock to keep the patchset simple. Among them
are swap and userfault pages. The main reason for skipping those cases was
that mmap_lock could be dropped while handling these faults, which required
additional logic to be implemented.

Implement a mechanism to allow per-vma locks to be dropped for these
cases. When that happens, handle_mm_fault returns the new
VM_FAULT_VMA_UNLOCKED vm_fault_reason bit along with VM_FAULT_RETRY to
indicate that the VMA lock was dropped. Naturally, once the VMA lock is
dropped, the VMA should be considered unstable and must not be used.
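
For illustration only, an architecture page fault handler that already
uses per-vma locks could consume the new bit roughly like the sketch
below. This is a simplified example, not the exact code from patch 4/6;
lock_vma_under_rcu(), FAULT_FLAG_VMA_LOCK and vma_end_read() are the
existing per-vma lock helpers, and the surrounding control flow is
abbreviated:

	/* Try the fault under the per-VMA lock first. */
	vma = lock_vma_under_rcu(mm, addr);
	if (vma) {
		fault = handle_mm_fault(vma, addr,
					flags | FAULT_FLAG_VMA_LOCK, regs);
		/*
		 * If handle_mm_fault() dropped the VMA lock itself, the
		 * VMA may no longer be valid: do not touch it and do not
		 * unlock it again.
		 */
		if (!(fault & VM_FAULT_VMA_UNLOCKED))
			vma_end_read(vma);
		if (!(fault & VM_FAULT_RETRY))
			goto done;	/* handled under the VMA lock */
	}
	/* otherwise fall back to the mmap_lock path */

The key rule is that once VM_FAULT_VMA_UNLOCKED is set, the caller must
neither call vma_end_read() nor dereference the VMA again.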
Changes since v1 posted at [2]:
- New patch 1/6 to remove do_poll parameter from read_swap_cache_async(),
per Huang Ying
- New patch 3/6 to separate VM_FAULT_COMPLETED addition,
per Alistair Popple
- Added comment for VM_FAULT_VMA_UNLOCKED in 4/6, per Alistair Popple
- New patch 6/6 to handle userfaults under VMA lock
Note: I tried implementing Matthew's suggestion in [3] to add vmf_end_read,
but it gets quite messy since it would require changing code for every
architecture whenever we change the handle_mm_fault interface.
Note: patch 4/6 will cause a trivial merge conflict in arch/arm64/mm/fault.c
when applied on top of the mm-unstable branch, due to a patch from the ARM64
tree [4] which is not yet in mm-unstable.
[1] https://lore.kernel.org/all/20230227173632.3292573-1-surenb@google.com/
[2] https://lore.kernel.org/all/20230501175025.36233-1-surenb@google.com/
[3] https://lore.kernel.org/all/ZFEeHqzBJ6iOsRN+@casper.infradead.org/
[4] https://lore.kernel.org/all/20230524131305.2808-1-jszhang@kernel.org/
Suren Baghdasaryan (6):
  swap: remove remnants of polling from read_swap_cache_async
  mm: handle swap page faults under VMA lock if page is uncontended
  mm: add missing VM_FAULT_RESULT_TRACE name for VM_FAULT_COMPLETED
  mm: drop VMA lock before waiting for migration
  mm: implement folio wait under VMA lock
  mm: handle userfaults under VMA lock

 arch/arm64/mm/fault.c    |  3 ++-
 arch/powerpc/mm/fault.c  |  3 ++-
 arch/s390/mm/fault.c     |  3 ++-
 arch/x86/mm/fault.c      |  3 ++-
 fs/userfaultfd.c         | 42 ++++++++++++++++++-----------------
 include/linux/mm_types.h |  7 +++++-
 include/linux/pagemap.h  | 14 ++++++++----
 mm/filemap.c             | 37 +++++++++++++++++++------------
 mm/madvise.c             |  4 ++--
 mm/memory.c              | 48 ++++++++++++++++++++++------------------
 mm/swap.h                |  1 -
 mm/swap_state.c          | 12 +++++-----
 12 files changed, 103 insertions(+), 74 deletions(-)
--
2.41.0.162.gfafddb0af9-goog