Message-ID: <20260127192936.1250096-2-rppt@kernel.org>
Date: Tue, 27 Jan 2026 21:29:20 +0200
From: Mike Rapoport <rppt@...nel.org>
To: linux-mm@...ck.org
Cc: Andrea Arcangeli <aarcange@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
David Hildenbrand <david@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
James Houghton <jthoughton@...gle.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Michal Hocko <mhocko@...e.com>,
Mike Rapoport <rppt@...nel.org>,
Muchun Song <muchun.song@...ux.dev>,
Nikita Kalyazin <kalyazin@...zon.com>,
Oscar Salvador <osalvador@...e.de>,
Paolo Bonzini <pbonzini@...hat.com>,
Peter Xu <peterx@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Shuah Khan <shuah@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>,
linux-kernel@...r.kernel.org,
kvm@...r.kernel.org,
linux-kselftest@...r.kernel.org
Subject: [PATCH RFC 01/17] userfaultfd: introduce mfill_copy_folio_locked() helper
From: "Mike Rapoport (Microsoft)" <rppt@...nel.org>
Split the copying of data performed while the mmap_lock is held out of
mfill_atomic_pte_copy() into a new helper, mfill_copy_folio_locked().
This improves readability and makes the complex mfill_atomic_pte_copy()
easier to comprehend.
No functional change.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@...nel.org>
---
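Note: below is a minimal sketch of the two-phase protocol the helper is
meant for, not part of the patch. copy_into_folio() is a made-up name,
and in the real code the fallback happens one level up in mfill_atomic(),
which restarts the whole operation and revalidates the VMA after
retaking the mmap_lock rather than looping in place as shown here.

/*
 * Illustrative only: try the copy under the mmap_lock with page faults
 * disabled (mfill_copy_folio_locked() does that internally); if the
 * source is not resident, drop the lock and redo the copy where
 * faulting is allowed, avoiding the cross-process read-lock deadlock
 * described in the comment in the helper.
 */
static int copy_into_folio(struct mm_struct *mm, struct folio *folio,
			   unsigned long src_addr)
{
	long err;

	/* Phase 1: opportunistic copy under mmap_lock, faults disabled. */
	if (!mfill_copy_folio_locked(folio, src_addr))
		return 0;

	/* Phase 2: drop the lock so copy_from_user() may fault and sleep. */
	mmap_read_unlock(mm);
	err = copy_folio_from_user(folio, (const void __user *)src_addr, true);
	mmap_read_lock(mm);

	return err ? -EFAULT : 0;
}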
mm/userfaultfd.c | 59 ++++++++++++++++++++++++++++--------------------
1 file changed, 35 insertions(+), 24 deletions(-)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index e6dfd5f28acd..a0885d543f22 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -238,6 +238,40 @@ int mfill_atomic_install_pte(pmd_t *dst_pmd,
return ret;
}
+static int mfill_copy_folio_locked(struct folio *folio, unsigned long src_addr)
+{
+ void *kaddr;
+ int ret;
+
+ kaddr = kmap_local_folio(folio, 0);
+ /*
+ * The read mmap_lock is held here. Despite the
+ * mmap_lock being read recursive a deadlock is still
+ * possible if a writer has taken a lock. For example:
+ *
+ * process A thread 1 takes read lock on own mmap_lock
+ * process A thread 2 calls mmap, blocks taking write lock
+ * process B thread 1 takes page fault, read lock on own mmap lock
+ * process B thread 2 calls mmap, blocks taking write lock
+ * process A thread 1 blocks taking read lock on process B
+ * process B thread 1 blocks taking read lock on process A
+ *
+ * Disable page faults to prevent potential deadlock
+ * and retry the copy outside the mmap_lock.
+ */
+ pagefault_disable();
+ ret = copy_from_user(kaddr, (const void __user *) src_addr,
+ PAGE_SIZE);
+ pagefault_enable();
+ kunmap_local(kaddr);
+
+ if (ret)
+ return -EFAULT;
+
+ flush_dcache_folio(folio);
+ return ret;
+}
+
static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
struct vm_area_struct *dst_vma,
unsigned long dst_addr,
@@ -245,7 +279,6 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
uffd_flags_t flags,
struct folio **foliop)
{
- void *kaddr;
int ret;
struct folio *folio;
@@ -256,27 +289,7 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
if (!folio)
goto out;
- kaddr = kmap_local_folio(folio, 0);
- /*
- * The read mmap_lock is held here. Despite the
- * mmap_lock being read recursive a deadlock is still
- * possible if a writer has taken a lock. For example:
- *
- * process A thread 1 takes read lock on own mmap_lock
- * process A thread 2 calls mmap, blocks taking write lock
- * process B thread 1 takes page fault, read lock on own mmap lock
- * process B thread 2 calls mmap, blocks taking write lock
- * process A thread 1 blocks taking read lock on process B
- * process B thread 1 blocks taking read lock on process A
- *
- * Disable page faults to prevent potential deadlock
- * and retry the copy outside the mmap_lock.
- */
- pagefault_disable();
- ret = copy_from_user(kaddr, (const void __user *) src_addr,
- PAGE_SIZE);
- pagefault_enable();
- kunmap_local(kaddr);
+ ret = mfill_copy_folio_locked(folio, src_addr);
/* fallback to copy_from_user outside mmap_lock */
if (unlikely(ret)) {
@@ -285,8 +298,6 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
/* don't free the page */
goto out;
}
-
- flush_dcache_folio(folio);
} else {
folio = *foliop;
*foliop = NULL;
--
2.51.0