Message-ID: <20190221181931.GS2813@redhat.com>
Date: Thu, 21 Feb 2019 13:19:32 -0500
From: Jerome Glisse <jglisse@...hat.com>
To: Peter Xu <peterx@...hat.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
David Hildenbrand <david@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
Maya Gokhale <gokhale2@...l.gov>,
Pavel Emelyanov <xemul@...tuozzo.com>,
Johannes Weiner <hannes@...xchg.org>,
Martin Cracauer <cracauer@...s.org>, Shaohua Li <shli@...com>,
Marty McFadden <mcfadden8@...l.gov>,
Andrea Arcangeli <aarcange@...hat.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Denis Plotnikov <dplotnikov@...tuozzo.com>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Mel Gorman <mgorman@...e.de>,
"Kirill A . Shutemov" <kirill@...temov.name>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>
Subject: Re: [PATCH v2 19/26] userfaultfd: introduce helper vma_find_uffd
On Tue, Feb 12, 2019 at 10:56:25AM +0800, Peter Xu wrote:
> We have multiple places (and more coming) that need to find a
> userfault-enabled VMA covering a specific memory range within an mm
> struct. This patch introduces a helper for it and applies it to the
> existing code.
>
> Suggested-by: Mike Rapoport <rppt@...ux.vnet.ibm.com>
> Signed-off-by: Peter Xu <peterx@...hat.com>
Reviewed-by: Jérôme Glisse <jglisse@...hat.com>
> ---
> mm/userfaultfd.c | 54 +++++++++++++++++++++++++++---------------------
> 1 file changed, 30 insertions(+), 24 deletions(-)
>
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index 80bcd642911d..fefa81c301b7 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -20,6 +20,34 @@
> #include <asm/tlbflush.h>
> #include "internal.h"
>
> +/*
> + * Find a valid userfault enabled VMA region that covers the whole
> + * address range, or NULL on failure. Must be called with mmap_sem
> + * held.
> + */
> +static struct vm_area_struct *vma_find_uffd(struct mm_struct *mm,
> + unsigned long start,
> + unsigned long len)
> +{
> + struct vm_area_struct *vma = find_vma(mm, start);
> +
> + if (!vma)
> + return NULL;
> +
> + /*
> + * Check the vma is registered in uffd, this is required to
> + * enforce the VM_MAYWRITE check done at uffd registration
> + * time.
> + */
> + if (!vma->vm_userfaultfd_ctx.ctx)
> + return NULL;
> +
> + if (start < vma->vm_start || start + len > vma->vm_end)
> + return NULL;
> +
> + return vma;
> +}
> +
> static int mcopy_atomic_pte(struct mm_struct *dst_mm,
> pmd_t *dst_pmd,
> struct vm_area_struct *dst_vma,
> @@ -228,20 +256,9 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
> */
> if (!dst_vma) {
> err = -ENOENT;
> - dst_vma = find_vma(dst_mm, dst_start);
> + dst_vma = vma_find_uffd(dst_mm, dst_start, len);
> if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
> goto out_unlock;
> - /*
> - * Check the vma is registered in uffd, this is
> - * required to enforce the VM_MAYWRITE check done at
> - * uffd registration time.
> - */
> - if (!dst_vma->vm_userfaultfd_ctx.ctx)
> - goto out_unlock;
> -
> - if (dst_start < dst_vma->vm_start ||
> - dst_start + len > dst_vma->vm_end)
> - goto out_unlock;
>
> err = -EINVAL;
> if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
> @@ -488,20 +505,9 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
> * both valid and fully within a single existing vma.
> */
> err = -ENOENT;
> - dst_vma = find_vma(dst_mm, dst_start);
> + dst_vma = vma_find_uffd(dst_mm, dst_start, len);
> if (!dst_vma)
> goto out_unlock;
> - /*
> - * Check the vma is registered in uffd, this is required to
> - * enforce the VM_MAYWRITE check done at uffd registration
> - * time.
> - */
> - if (!dst_vma->vm_userfaultfd_ctx.ctx)
> - goto out_unlock;
> -
> - if (dst_start < dst_vma->vm_start ||
> - dst_start + len > dst_vma->vm_end)
> - goto out_unlock;
>
> err = -EINVAL;
> /*
> --
> 2.17.1
>