Date:   Fri, 22 Feb 2019 12:41:05 +0800
From:   Peter Xu <peterx@...hat.com>
To:     Jerome Glisse <jglisse@...hat.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        David Hildenbrand <david@...hat.com>,
        Hugh Dickins <hughd@...gle.com>,
        Maya Gokhale <gokhale2@...l.gov>,
        Pavel Emelyanov <xemul@...tuozzo.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Martin Cracauer <cracauer@...s.org>, Shaohua Li <shli@...com>,
        Marty McFadden <mcfadden8@...l.gov>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Denis Plotnikov <dplotnikov@...tuozzo.com>,
        Mike Rapoport <rppt@...ux.vnet.ibm.com>,
        Mel Gorman <mgorman@...e.de>,
        "Kirill A . Shutemov" <kirill@...temov.name>,
        "Dr . David Alan Gilbert" <dgilbert@...hat.com>
Subject: Re: [PATCH v2 05/26] mm: gup: allow VM_FAULT_RETRY for multiple times

On Thu, Feb 21, 2019 at 11:06:55AM -0500, Jerome Glisse wrote:
> On Tue, Feb 12, 2019 at 10:56:11AM +0800, Peter Xu wrote:
> > This is the gup counterpart of the change that allows the VM_FAULT_RETRY
> > to happen for more than once.
> > 
> > Signed-off-by: Peter Xu <peterx@...hat.com>
> 
> Reviewed-by: Jérôme Glisse <jglisse@...hat.com>

Thanks for the r-b, Jerome!

Though I plan to change this patch a bit: I just noticed that I didn't
touch up the hugetlbfs path for GUP.  It is not strictly needed yet
because hugetlbfs is not supported so far, but I think I'd better do it
in this same patch so that follow-up work on hugetlb will be easier and
the patch will be more self-contained.  The new version will simply
squash the change below into the current patch:

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e3c738bde72e..a8eace2d5296 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4257,8 +4257,10 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
                                fault_flags |= FAULT_FLAG_ALLOW_RETRY |
                                        FAULT_FLAG_RETRY_NOWAIT;
                        if (flags & FOLL_TRIED) {
-                               VM_WARN_ON_ONCE(fault_flags &
-                                               FAULT_FLAG_ALLOW_RETRY);
+                               /*
+                                * Note: FAULT_FLAG_ALLOW_RETRY and
+                                * FAULT_FLAG_TRIED can co-exist
+                                */
                                fault_flags |= FAULT_FLAG_TRIED;
                        }
                        ret = hugetlb_fault(mm, vma, vaddr, fault_flags);

I'd say this change is straightforward (it's the same change as in
faultin_page below, just for hugetlbfs).  Please let me know whether
you'd still like to offer the r-b with the above change squashed in
(I'll be more than glad to take it!), or I'll just wait for your review
comments when I post the next version.
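
In case it helps review, here is a tiny user-space toy model of the
retry pattern this series moves GUP to: the caller keeps
FAULT_FLAG_ALLOW_RETRY set across retries and adds FAULT_FLAG_TRIED
after the first attempt, looping until the handler stops returning
VM_FAULT_RETRY.  This is only a sketch -- fake_fault(), the attempt
counter and the flag values are all made up for illustration, not
kernel code:

/* Illustrative model only; flag values are arbitrary. */
#include <stdio.h>

#define FAULT_FLAG_ALLOW_RETRY  0x01
#define FAULT_FLAG_TRIED        0x02
#define VM_FAULT_RETRY          0x400

/*
 * Stand-in for handle_mm_fault()/hugetlb_fault(): pretend the page
 * only becomes ready on the third attempt.
 */
static int fake_fault(unsigned int flags, int attempt)
{
	if ((flags & FAULT_FLAG_ALLOW_RETRY) && attempt < 3)
		return VM_FAULT_RETRY;
	return 0;
}

int main(void)
{
	unsigned int flags = FAULT_FLAG_ALLOW_RETRY;
	int attempt = 0, ret;

	do {
		ret = fake_fault(flags, ++attempt);
		/*
		 * After the first attempt, TRIED is added but
		 * ALLOW_RETRY is kept, so the handler may return
		 * RETRY again and we simply loop.
		 */
		flags |= FAULT_FLAG_TRIED;
	} while (ret & VM_FAULT_RETRY);

	printf("fault completed after %d attempts\n", attempt);
	return 0;
}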

Thanks,

> 
> > ---
> >  mm/gup.c | 17 +++++++++++++----
> >  1 file changed, 13 insertions(+), 4 deletions(-)
> > 
> > diff --git a/mm/gup.c b/mm/gup.c
> > index fa75a03204c1..ba387aec0d80 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -528,7 +528,10 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
> >  	if (*flags & FOLL_NOWAIT)
> >  		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
> >  	if (*flags & FOLL_TRIED) {
> > -		VM_WARN_ON_ONCE(fault_flags & FAULT_FLAG_ALLOW_RETRY);
> > +		/*
> > +		 * Note: FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_TRIED
> > +		 * can co-exist
> > +		 */
> >  		fault_flags |= FAULT_FLAG_TRIED;
> >  	}
> >  
> > @@ -943,17 +946,23 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
> >  		/* VM_FAULT_RETRY triggered, so seek to the faulting offset */
> >  		pages += ret;
> >  		start += ret << PAGE_SHIFT;
> > +		lock_dropped = true;
> >  
> > +retry:
> >  		/*
> >  		 * Repeat on the address that fired VM_FAULT_RETRY
> > -		 * without FAULT_FLAG_ALLOW_RETRY but with
> > +		 * with both FAULT_FLAG_ALLOW_RETRY and
> >  		 * FAULT_FLAG_TRIED.
> >  		 */
> >  		*locked = 1;
> > -		lock_dropped = true;
> >  		down_read(&mm->mmap_sem);
> >  		ret = __get_user_pages(tsk, mm, start, 1, flags | FOLL_TRIED,
> > -				       pages, NULL, NULL);
> > +				       pages, NULL, locked);
> > +		if (!*locked) {
> > +			/* Continue to retry until we succeeded */
> > +			BUG_ON(ret != 0);
> > +			goto retry;
> > +		}
> >  		if (ret != 1) {
> >  			BUG_ON(ret > 1);
> >  			if (!pages_done)
> > -- 
> > 2.17.1
> > 

-- 
Peter Xu
