Date:   Tue, 5 Jan 2021 16:37:27 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     Andy Lutomirski <luto@...nel.org>, Peter Xu <peterx@...hat.com>,
        Nadav Amit <nadav.amit@...il.com>, Yu Zhao <yuzhao@...gle.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        linux-mm <linux-mm@...ck.org>,
        lkml <linux-kernel@...r.kernel.org>,
        Pavel Emelyanov <xemul@...nvz.org>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Mike Rapoport <rppt@...ux.vnet.ibm.com>,
        stable <stable@...r.kernel.org>,
        Minchan Kim <minchan@...nel.org>, Will Deacon <will@...nel.org>
Subject: Re: [PATCH] mm/userfaultfd: fix memory corruption due to writeprotect

On Mon, Dec 21, 2020 at 08:16:11PM -0800, Linus Torvalds wrote:

> So I think the basic rule is that "if you hold mmap_sem for writing,
> you're always safe". And that really should be considered the
> "default" locking.
> 
> ANY time you make a modification to the VM layer, you should basically
> always treat it as a write operation, and get the mmap_sem for
> writing.
> 
> Yeah, yeah, that's a bit simplified, and it ignores various special
> cases (and the hardware page table walkers that obviously take no
> locks at all), but if you hold the mmap_sem for writing you won't
> really race with anything else - not page faults, and not other
> "modify this VM".

> To a first approximation, everybody that changes the VM should take
> the mmap_sem for writing, and the readers should just be about page
> fault handling (and I count GUP as "page fault handling" too -
> it's kind of the same "look up page" rather than "modify vm" kind of
> operation).
> 
> And there are just a _lot_ more page faults than there are things that
> modify the page tables and the vma's.
> 
> So having that mental model of "lookup of pages in a VM takes mmap_sem
> for reading, any modification of the VM uses it for writing" makes
> sense both from a performance angle and a logical standpoint. It's the
> correct model.
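
And the reader side of that model (again a sketch, current API names):

	mmap_read_lock(mm);
	/* look up the vma and the page, handle the fault / GUP walk */
	mmap_read_unlock(mm);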

> And it's worth noting that COW is still "lookup of pages", even though
> it might modify the page tables in the process. The same way lookup
> can modify the page tables to mark things accessed or dirty.
> 
> So COW is still a lookup operation, in ways that "change the
> writability of this range" very much is not. COW is "lookup for
> write", and the magic we do to copy to make that write valid is still
> all about the lookup of the page.

(your other email clarified this point; the COW needs to copy while
holding the PTL and we need TLBI under PTL if we're to change this)
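
I.e. the ordering would have to become something like this (a sketch
of the shape only, not the current wp_page_copy() code):

	spin_lock(ptl);			/* page table lock */
	if (pte_same(*pte, orig_pte)) {
		/* copy while holding the PTL ... */
		copy_user_highpage(new_page, old_page, addr, vma);
		entry = maybe_mkwrite(pte_mkdirty(mk_pte(new_page,
					vma->vm_page_prot)), vma);
		/* ... and invalidate the old TLB entry before unlock */
		ptep_clear_flush(vma, addr, pte);
		set_pte_at(mm, addr, pte, entry);
	}
	spin_unlock(ptl);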

> Which brings up another mental mistake I saw earlier in this thread:
> you should not think "mmap_sem is for vma, and the page table lock is
> for the page table changes".
> 
> mmap_sem is the primary lock for any modifications to the VM layout,
> whether it be in the vma's or in the page tables.
> 
> Now, the page table lock does exist _in_addition_to_ the mmap_sem, but
> that is partly because
> 
>  (a) we have things that historically walked the page tables _without_
> walking the vma's (notably the virtual memory scanning)
> 
>  (b) we do allow concurrent page faults, so we then need a lower-level
> lock to serialize the parallelism we _do_ have.
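
Right, so a faulting thread ends up with the nested pattern (sketch):

	mmap_read_lock(mm);		/* faults may run concurrently */
	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	/* inspect/update this one PTE, serialized against other faults */
	pte_unmap_unlock(pte, ptl);
	mmap_read_unlock(mm);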

And I'm thinking the speculative page fault series steps right into all
of this: it fundamentally avoids mmap_sem and relies entirely on the PTL.

Which opens it up to exactly the races explored here.

The range lock approach does not suffer from this, but I'm still
worried about the actual performance of that thing.
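
(For reference, the SPF series validates against a per-vma sequence
count instead, roughly like this; a sketch, with vm_sequence named as
in those patches, if I remember them right:

	seq = raw_read_seqcount(&vma->vm_sequence);
	/* speculative walk; the PTL is only taken at the very end */
	if (read_seqcount_retry(&vma->vm_sequence, seq))
		goto fallback;		/* retake mmap_sem and retry */

so anything the write side changes without bumping that count, or
protects with mmap_sem alone, is invisible to the speculative path.)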
