Message-Id: <20181206143003.002397136@linuxfoundation.org>
Date: Thu, 6 Dec 2018 15:38:57 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Andrea Arcangeli <aarcange@...hat.com>,
Mike Rapoport <rppt@...ux.ibm.com>,
Hugh Dickins <hughd@...gle.com>,
"Dr. David Alan Gilbert" <dgilbert@...hat.com>,
Jann Horn <jannh@...gle.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Peter Xu <peterx@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 4.14 23/55] userfaultfd: shmem: allocate anonymous memory for MAP_PRIVATE shmem
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrea Arcangeli <aarcange@...hat.com>
commit 5b51072e97d587186c2f5390c8c9c1fb7e179505 upstream.

Userfaultfd did not create private memory when UFFDIO_COPY was invoked
on a MAP_PRIVATE shmem mapping. Instead it wrote to the shmem file,
even when that file had not been opened for writing. Fortunately, that
could only happen where there was a hole in the file.
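
For illustration only (this sketch is not part of the patch), the
affected scenario in userland looks roughly like the following,
assuming a tmpfs-backed file created with memfd_create(); error
handling is omitted:

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* tmpfs-backed file, left entirely as a hole */
	int shm = memfd_create("uffd-shmem", 0);
	ftruncate(shm, page);

	/* MAP_PRIVATE: filling the hole must not write to the file */
	char *dst = mmap(NULL, page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE, shm, 0);

	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	struct uffdio_api api = { .api = UFFD_API };
	ioctl(uffd, UFFDIO_API, &api);

	struct uffdio_register reg = {
		.range = { .start = (unsigned long)dst, .len = page },
		.mode  = UFFDIO_REGISTER_MODE_MISSING,
	};
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	/* Resolve the missing page with UFFDIO_COPY */
	char *src = mmap(NULL, page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	memset(src, 0xaa, page);

	struct uffdio_copy copy = {
		.dst  = (unsigned long)dst,
		.src  = (unsigned long)src,
		.len  = page,
		.mode = 0,
	};
	ioctl(uffd, UFFDIO_COPY, &copy);
	return 0;
}

Before this fix, the UFFDIO_COPY above could populate the shmem file
itself rather than private memory of the MAP_PRIVATE mapping.
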
Fix the shmem-backed implementation of UFFDIO_COPY to create private
memory for MAP_PRIVATE mappings. The hugetlbfs-backed implementation
was already correct.

This change is visible to userland only if userfaultfd has been used
in unintended ways, so it introduces a small risk of incompatibility,
but it is necessary in order to respect file permissions.

An app that uses UFFDIO_COPY for anything like postcopy live migration
won't notice the difference; in fact it will run faster, because there
is no longer any copy-on-write or wasted memory in the tmpfs pagecache.

Userfaults on MAP_PRIVATE shmem keep triggering only on file holes like
before.

The real zeropage can also be built on a MAP_PRIVATE shmem mapping
through UFFDIO_ZEROPAGE, and that is safe because the zeropage pte is
never dirty; as a result, even an mprotect upgrading the vma permission
from PROT_READ to PROT_READ|PROT_WRITE won't make the zeropage pte
writable.
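
Again for illustration only (not part of the patch), installing the
zeropage over such a MAP_PRIVATE shmem range could look like the
hypothetical helper below, which assumes the uffd, dst and page set up
in the earlier sketch:

static void zeropage_then_mprotect(int uffd, char *dst, long page)
{
	struct uffdio_zeropage zp = {
		.range = { .start = (unsigned long)dst, .len = page },
		.mode  = 0,
	};
	ioctl(uffd, UFFDIO_ZEROPAGE, &zp);

	/*
	 * Upgrading protections does not make the zeropage pte writable;
	 * a later store still COWs into fresh anonymous memory.
	 */
	mprotect(dst, page, PROT_READ | PROT_WRITE);
}
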
Link: http://lkml.kernel.org/r/20181126173452.26955-3-aarcange@redhat.com
Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
Signed-off-by: Andrea Arcangeli <aarcange@...hat.com>
Reported-by: Mike Rapoport <rppt@...ux.ibm.com>
Reviewed-by: Hugh Dickins <hughd@...gle.com>
Cc: <stable@...r.kernel.org>
Cc: "Dr. David Alan Gilbert" <dgilbert@...hat.com>
Cc: Jann Horn <jannh@...gle.com>
Cc: Mike Kravetz <mike.kravetz@...cle.com>
Cc: Peter Xu <peterx@...hat.com>
Cc: stable@...r.kernel.org
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
mm/userfaultfd.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -381,7 +381,17 @@ static __always_inline ssize_t mfill_ato
 {
 	ssize_t err;
 
-	if (vma_is_anonymous(dst_vma)) {
+	/*
+	 * The normal page fault path for a shmem will invoke the
+	 * fault, fill the hole in the file and COW it right away. The
+	 * result generates plain anonymous memory. So when we are
+	 * asked to fill a hole in a MAP_PRIVATE shmem mapping, we'll
+	 * generate anonymous memory directly without actually filling
+	 * the hole. For the MAP_PRIVATE case the robustness check
+	 * only happens in the pagetable (to verify it's still none)
+	 * and not in the radix tree.
+	 */
+	if (!(dst_vma->vm_flags & VM_SHARED)) {
 		if (!zeropage)
 			err = mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
 					       dst_addr, src_addr, page);
@@ -480,7 +490,8 @@ retry:
 	 * dst_vma.
 	 */
 	err = -ENOMEM;
-	if (vma_is_anonymous(dst_vma) && unlikely(anon_vma_prepare(dst_vma)))
+	if (!(dst_vma->vm_flags & VM_SHARED) &&
+	    unlikely(anon_vma_prepare(dst_vma)))
 		goto out_unlock;
 
 	while (src_addr < src_start + len) {