Message-ID: <aYI0HmP-XZNBI-gb@x1.local>
Date: Tue, 3 Feb 2026 12:45:02 -0500
From: Peter Xu <peterx@...hat.com>
To: Mike Rapoport <rppt@...nel.org>
Cc: linux-mm@...ck.org, Andrea Arcangeli <aarcange@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Axel Rasmussen <axelrasmussen@...gle.com>,
	Baolin Wang <baolin.wang@...ux.alibaba.com>,
	David Hildenbrand <david@...hat.com>,
	Hugh Dickins <hughd@...gle.com>,
	James Houghton <jthoughton@...gle.com>,
	"Liam R. Howlett" <Liam.Howlett@...cle.com>,
	Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
	Michal Hocko <mhocko@...e.com>, Muchun Song <muchun.song@...ux.dev>,
	Nikita Kalyazin <kalyazin@...zon.com>,
	Oscar Salvador <osalvador@...e.de>,
	Paolo Bonzini <pbonzini@...hat.com>,
	Sean Christopherson <seanjc@...gle.com>,
	Shuah Khan <shuah@...nel.org>,
	Suren Baghdasaryan <surenb@...gle.com>,
	Vlastimil Babka <vbabka@...e.cz>, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: Re: [PATCH RFC 01/17] userfaultfd: introduce
 mfill_copy_folio_locked() helper

On Tue, Jan 27, 2026 at 09:29:20PM +0200, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rppt@...nel.org>
> 
> Split copying of data when locks held from mfill_atomic_pte_copy() into
> a helper function mfill_copy_folio_locked().
> 
> This improves code readability and makes the complex
> mfill_atomic_pte_copy() function easier to comprehend.
> 
> No functional change.
> 
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@...nel.org>

The code movement looks all fine,

Acked-by: Peter Xu <peterx@...hat.com>

Just one pure question to ask.

> ---
>  mm/userfaultfd.c | 59 ++++++++++++++++++++++++++++--------------------
>  1 file changed, 35 insertions(+), 24 deletions(-)
> 
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index e6dfd5f28acd..a0885d543f22 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -238,6 +238,40 @@ int mfill_atomic_install_pte(pmd_t *dst_pmd,
>  	return ret;
>  }
>  
> +static int mfill_copy_folio_locked(struct folio *folio, unsigned long src_addr)
> +{
> +	void *kaddr;
> +	int ret;
> +
> +	kaddr = kmap_local_folio(folio, 0);
> +	/*
> +	 * The read mmap_lock is held here.  Despite the
> +	 * mmap_lock being read recursive a deadlock is still
> +	 * possible if a writer has taken a lock.  For example:
> +	 *
> +	 * process A thread 1 takes read lock on own mmap_lock
> +	 * process A thread 2 calls mmap, blocks taking write lock
> +	 * process B thread 1 takes page fault, read lock on own mmap lock
> +	 * process B thread 2 calls mmap, blocks taking write lock
> +	 * process A thread 1 blocks taking read lock on process B
> +	 * process B thread 1 blocks taking read lock on process A

While moving, I wonder whether we need such a complex use case to describe
the deadlock.  Shouldn't it already happen with one process only?

  process A thread 1 takes read lock (e.g. reaching here but
                     before copy_from_user)
  process A thread 2 calls mmap, blocks taking write lock
  process A thread 1 goes on copy_from_user(), trigger page fault,
                     then tries to re-take the read lock

IIUC the above should already cause a deadlock when the rwsem prioritizes
the pending write lock here.
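
To make the "retry the copy outside the mmap_lock" part concrete, IIUC the
fallback path in mfill_atomic() looks roughly like below (a simplified
sketch from memory, not the exact code; VMA revalidation and most error
handling omitted):

  /* mmap_lock held for read; the locked copy above just failed */
  if (unlikely(err == -ENOENT)) {
          void *kaddr;

          mmap_read_unlock(dst_mm);
          kaddr = kmap_local_folio(folio, 0);
          /* page faults are fine here: no mmap_lock is held */
          err = copy_from_user(kaddr, (const void __user *)src_addr,
                               PAGE_SIZE);
          kunmap_local(kaddr);
          if (unlikely(err)) {
                  err = -EFAULT;
                  goto out;
          }
          flush_dcache_folio(folio);
          /* retake the mmap_lock and retry with *foliop prefilled */
          goto retry;
  }

So pagefault_disable() only turns the would-be recursive fault into a
plain failure, and the slow path redoes the copy with no mmap_lock held
at all.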

> +	 *
> +	 * Disable page faults to prevent potential deadlock
> +	 * and retry the copy outside the mmap_lock.
> +	 */
> +	pagefault_disable();
> +	ret = copy_from_user(kaddr, (const void __user *) src_addr,
> +			     PAGE_SIZE);
> +	pagefault_enable();
> +	kunmap_local(kaddr);
> +
> +	if (ret)
> +		return -EFAULT;
> +
> +	flush_dcache_folio(folio);
> +	return ret;
> +}
> +
>  static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
>  				 struct vm_area_struct *dst_vma,
>  				 unsigned long dst_addr,
> @@ -245,7 +279,6 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
>  				 uffd_flags_t flags,
>  				 struct folio **foliop)
>  {
> -	void *kaddr;
>  	int ret;
>  	struct folio *folio;
>  
> @@ -256,27 +289,7 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
>  		if (!folio)
>  			goto out;
>  
> -		kaddr = kmap_local_folio(folio, 0);
> -		/*
> -		 * The read mmap_lock is held here.  Despite the
> -		 * mmap_lock being read recursive a deadlock is still
> -		 * possible if a writer has taken a lock.  For example:
> -		 *
> -		 * process A thread 1 takes read lock on own mmap_lock
> -		 * process A thread 2 calls mmap, blocks taking write lock
> -		 * process B thread 1 takes page fault, read lock on own mmap lock
> -		 * process B thread 2 calls mmap, blocks taking write lock
> -		 * process A thread 1 blocks taking read lock on process B
> -		 * process B thread 1 blocks taking read lock on process A
> -		 *
> -		 * Disable page faults to prevent potential deadlock
> -		 * and retry the copy outside the mmap_lock.
> -		 */
> -		pagefault_disable();
> -		ret = copy_from_user(kaddr, (const void __user *) src_addr,
> -				     PAGE_SIZE);
> -		pagefault_enable();
> -		kunmap_local(kaddr);
> +		ret = mfill_copy_folio_locked(folio, src_addr);
>  
>  		/* fallback to copy_from_user outside mmap_lock */
>  		if (unlikely(ret)) {
> @@ -285,8 +298,6 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
>  			/* don't free the page */
>  			goto out;
>  		}
> -
> -		flush_dcache_folio(folio);
>  	} else {
>  		folio = *foliop;
>  		*foliop = NULL;
> -- 
> 2.51.0
> 

-- 
Peter Xu

