Message-Id: <20190212025632.28946-21-peterx@redhat.com>
Date: Tue, 12 Feb 2019 10:56:26 +0800
From: Peter Xu <peterx@...hat.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: David Hildenbrand <david@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
Maya Gokhale <gokhale2@...l.gov>,
Jerome Glisse <jglisse@...hat.com>,
Pavel Emelyanov <xemul@...tuozzo.com>,
Johannes Weiner <hannes@...xchg.org>, peterx@...hat.com,
Martin Cracauer <cracauer@...s.org>, Shaohua Li <shli@...com>,
Marty McFadden <mcfadden8@...l.gov>,
Andrea Arcangeli <aarcange@...hat.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Denis Plotnikov <dplotnikov@...tuozzo.com>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Mel Gorman <mgorman@...e.de>,
"Kirill A . Shutemov" <kirill@...temov.name>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Rik van Riel <riel@...hat.com>
Subject: [PATCH v2 20/26] userfaultfd: wp: support write protection for userfault vma range

From: Shaohua Li <shli@...com>

Add an API to enable/disable write protection for a vma range. Unlike
mprotect, this doesn't split/merge vmas.
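As a rough illustration only (not part of this patch), the ioctl handler
added later in this series could drive the new helper along these lines.
The wrapper name uffd_writeprotect() is made up for the sketch, while
ctx->mm and ctx->mmap_changing refer to the userfaultfd_ctx fields in
fs/userfaultfd.c:

    /*
     * Illustrative caller: write-protect (or un-protect) a page-aligned
     * range of a uffd-wp registered anonymous vma, without splitting or
     * merging vmas.
     */
    static int uffd_writeprotect(struct userfaultfd_ctx *ctx,
                                 unsigned long start, unsigned long len,
                                 bool enable_wp)
    {
            /*
             * Returns -ENOENT if the range is not fully covered by a
             * suitable vma, -EAGAIN if mappings changed because of a
             * non-cooperative event and the caller should retry.
             */
            return mwriteprotect_range(ctx->mm, start, len, enable_wp,
                                       &ctx->mmap_changing);
    }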
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Rik van Riel <riel@...hat.com>
Cc: Kirill A. Shutemov <kirill@...temov.name>
Cc: Mel Gorman <mgorman@...e.de>
Cc: Hugh Dickins <hughd@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>
Signed-off-by: Shaohua Li <shli@...com>
Signed-off-by: Andrea Arcangeli <aarcange@...hat.com>
[peterx:
- use the helper to find VMA;
- return -ENOENT if not found to match mcopy case;
- use the new MM_CP_UFFD_WP* flags for change_protection
- check against mmap_changing for failures]
Signed-off-by: Peter Xu <peterx@...hat.com>
---
include/linux/userfaultfd_k.h | 3 ++
mm/userfaultfd.c | 54 +++++++++++++++++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 765ce884cec0..8f6e6ed544fb 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -39,6 +39,9 @@ extern ssize_t mfill_zeropage(struct mm_struct *dst_mm,
                               unsigned long dst_start,
                               unsigned long len,
                               bool *mmap_changing);
+extern int mwriteprotect_range(struct mm_struct *dst_mm,
+                               unsigned long start, unsigned long len,
+                               bool enable_wp, bool *mmap_changing);
 
 /* mm helpers */
 static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index fefa81c301b7..529d180bb4d7 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -639,3 +639,57 @@ ssize_t mfill_zeropage(struct mm_struct *dst_mm, unsigned long start,
 {
         return __mcopy_atomic(dst_mm, start, 0, len, true, mmap_changing, 0);
 }
+
+int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
+                        unsigned long len, bool enable_wp, bool *mmap_changing)
+{
+        struct vm_area_struct *dst_vma;
+        pgprot_t newprot;
+        int err;
+
+        /*
+         * Sanitize the command parameters:
+         */
+        BUG_ON(start & ~PAGE_MASK);
+        BUG_ON(len & ~PAGE_MASK);
+
+        /* Does the address range wrap, or is the span zero-sized? */
+        BUG_ON(start + len <= start);
+
+        down_read(&dst_mm->mmap_sem);
+
+        /*
+         * If memory mappings are changing because of non-cooperative
+         * operation (e.g. mremap) running in parallel, bail out and
+         * request the user to retry later
+         */
+        err = -EAGAIN;
+        if (mmap_changing && READ_ONCE(*mmap_changing))
+                goto out_unlock;
+
+        err = -ENOENT;
+        dst_vma = vma_find_uffd(dst_mm, start, len);
+        /*
+         * Make sure the vma is not shared, that the dst range is
+         * both valid and fully within a single existing vma.
+         */
+        if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
+                goto out_unlock;
+        if (!userfaultfd_wp(dst_vma))
+                goto out_unlock;
+        if (!vma_is_anonymous(dst_vma))
+                goto out_unlock;
+
+        if (enable_wp)
+                newprot = vm_get_page_prot(dst_vma->vm_flags & ~(VM_WRITE));
+        else
+                newprot = vm_get_page_prot(dst_vma->vm_flags);
+
+        change_protection(dst_vma, start, start + len, newprot,
+                          enable_wp ? MM_CP_UFFD_WP : MM_CP_UFFD_WP_RESOLVE);
+
+        err = 0;
+out_unlock:
+        up_read(&dst_mm->mmap_sem);
+        return err;
+}
--
2.17.1
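For context, a rough sketch of the userspace side this helper is meant to
back. The UFFDIO_WRITEPROTECT ioctl, struct uffdio_writeprotect and
UFFDIO_WRITEPROTECT_MODE_WP are taken from a later patch in this series,
so treat the snippet as an assumption about that interface rather than as
part of this patch:

    #include <linux/userfaultfd.h>
    #include <sys/ioctl.h>

    /* Write-protect (wp != 0) or un-protect (wp == 0) [start, start+len). */
    static int wp_range(int uffd, unsigned long start, unsigned long len, int wp)
    {
            struct uffdio_writeprotect prms = {
                    .range = { .start = start, .len = len },
                    .mode  = wp ? UFFDIO_WRITEPROTECT_MODE_WP : 0,
            };

            /* The kernel side of this ioctl ends up in mwriteprotect_range(). */
            return ioctl(uffd, UFFDIO_WRITEPROTECT, &prms);
    }

In that design, a write fault on a protected page is reported to the
monitor with UFFD_PAGEFAULT_FLAG_WP set, and the monitor resolves it by
calling the same ioctl on the faulting range without the _MODE_WP bit.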