Date:   Sat, 19 Dec 2020 21:49:45 -0500
From:   Andrea Arcangeli <>
To:     Andy Lutomirski <>
Cc:     Nadav Amit <>,
        Dave Hansen <>,
        linux-mm <>, Peter Xu <>,
        lkml <>,
        Pavel Emelyanov <>,
        Mike Kravetz <>,
        Mike Rapoport <>,
        stable <>,
        Minchan Kim <>, Yu Zhao <>,
        Will Deacon <>,
        Peter Zijlstra <>
Subject: Re: [PATCH] mm/userfaultfd: fix memory corruption due to writeprotect

On Sat, Dec 19, 2020 at 06:01:39PM -0800, Andy Lutomirski wrote:
> I missed the beginning of this thread, but it looks to me like
> userfaultfd changes PTEs with no locking except mmap_read_lock().  It

There's no mmap_read_lock, I assume you mean mmap_lock for reading.

The PTEs are always changed under the PT lock; in fact there's no
problem with the PTE updates themselves. The only difference from the
mprotect runtime is that the mmap_lock is taken for reading. And the
effect contested in this change doesn't concern the PTE contents, but
supposedly the deferral of the TLB flush.

In this respect change_protection_range is identical to what already
happens with zap_page_range: zap_page_range is called with the
mmap_lock for reading by MADV_DONTNEED, and with the mmap_lock for
writing by munmap. change_protection_range is called with the
mmap_lock for writing by mprotect, and with the mmap_lock for reading
by UFFDIO_WRITEPROTECT.
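
A rough sketch of the lock modes at those call sites (simplified, not
the exact kernel code):

	/* MADV_DONTNEED: zap under mmap_lock for reading */
	mmap_read_lock(mm);
	zap_page_range(vma, start, len);
	mmap_read_unlock(mm);

	/* mprotect: change_protection under mmap_lock for writing */
	mmap_write_lock(mm);
	change_protection(vma, start, end, newprot, cp_flags);
	mmap_write_unlock(mm);

	/* UFFDIO_WRITEPROTECT: change_protection under mmap_lock for reading */
	mmap_read_lock(mm);
	change_protection(vma, start, end, newprot, MM_CP_UFFD_WP);
	mmap_read_unlock(mm);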

> also calls inc_tlb_flush_pending(), which is very explicitly
> documented as requiring the pagetable lock.  Those docs must be wrong,

The comment in inc_tlb_flush_pending() shows the pagetable lock is
taken after inc_tlb_flush_pending():

	 *	atomic_inc(&mm->tlb_flush_pending);
	 *	spin_lock(&ptl);
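
Paraphrasing the full ordering documented in that comment (the PT lock
is taken and released inside the flush-pending window, not around it):

	atomic_inc(&mm->tlb_flush_pending);
	spin_lock(&ptl);
	/* modify the PTEs */
	spin_unlock(&ptl);
	flush_tlb_range(vma, start, end);
	atomic_dec(&mm->tlb_flush_pending);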

> because mprotect() uses the mmap_sem write lock, which is just fine,
> but ISTM some kind of mutual exclusion with proper acquire/release
> ordering is indeed needed.  So the userfaultfd code seems bogus.

If there's a bug, it'd be nice to fix it without taking the mmap_lock
for writing.

The vma is guaranteed not to be modified, so I think it'd be pretty
bad if we had to take the mmap_lock for writing just to wait for a tlb
flush that is issued deferred in the context of the wrprotect.

> I think userfaultfd either needs to take a real lock (probably doesn't
> matter which) or the core rules about PTEs need to be rewritten.

It's not exactly clear how the do_wp_page could run on cpu1 before the
unprotect did an extra flush; I guess the trace needs one more step to
show that.

Anyway, to make the unprotect wait for the wrprotect to do its
deferred flush before the unprotect can even start, one more mutex in
the mm, taken by all callers of change_protection_range with the
mmap_lock for reading, may be enough.
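
Something along these lines (a rough sketch; tlb_flush_mutex is a
hypothetical new mm field, not existing code):

	/* hypothetical: serialize the deferred-flush window among
	 * change_protection_range callers holding mmap_lock for reading */
	mmap_read_lock(mm);
	mutex_lock(&mm->tlb_flush_mutex);
	change_protection(vma, start, end, newprot, cp_flags);
	flush_tlb_range(vma, start, end);	/* flush before letting others in */
	mutex_unlock(&mm->tlb_flush_mutex);
	mmap_read_unlock(mm);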
