Message-ID: <20230615001113.GB38211@monkey>
Date: Wed, 14 Jun 2023 17:11:13 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Peter Xu <peterx@...hat.com>
Cc: David Hildenbrand <david@...hat.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Matthew Wilcox <willy@...radead.org>,
Andrea Arcangeli <aarcange@...hat.com>,
John Hubbard <jhubbard@...dia.com>,
Mike Rapoport <rppt@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
"Kirill A . Shutemov" <kirill@...temov.name>,
Andrew Morton <akpm@...ux-foundation.org>,
James Houghton <jthoughton@...gle.com>,
Hugh Dickins <hughd@...gle.com>
Subject: Re: [PATCH 2/7] mm/hugetlb: Fix hugetlb_follow_page_mask() on
permission checks
On 06/14/23 11:46, Peter Xu wrote:
> On Wed, Jun 14, 2023 at 05:31:36PM +0200, David Hildenbrand wrote:
> > On 13.06.23 23:53, Peter Xu wrote:
>
> Then I assume no fixes/backports are needed at all (which is what this
> patch already does). It's purely to be prepared. I'll mention that in
> the new version.
Code looks fine to me. Feel free to add,
Reviewed-by: Mike Kravetz <mike.kravetz@...cle.com>
--
Mike Kravetz
> > >
> > > While at it, switch try_grab_page() to use WARN_ON_ONCE(), to make
> > > clear that it should just never fail.
> > >
> > > Signed-off-by: Peter Xu <peterx@...hat.com>
> > > ---
> > > mm/hugetlb.c | 22 ++++++++++++++++------
> > > 1 file changed, 16 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > > index 82dfdd96db4c..9c261921b2cf 100644
> > > --- a/mm/hugetlb.c
> > > +++ b/mm/hugetlb.c
> > > @@ -6481,8 +6481,21 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
> > > ptl = huge_pte_lock(h, mm, pte);
> > > entry = huge_ptep_get(pte);
> > > if (pte_present(entry)) {
> > > - page = pte_page(entry) +
> > > - ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
> > > + page = pte_page(entry);
> > > +
> > > + if (gup_must_unshare(vma, flags, page)) {
> > > + /* Tell the caller to do Copy-On-Read */
> > > + page = ERR_PTR(-EMLINK);
> > > + goto out;
> > > + }
> > > +
> > > + if ((flags & FOLL_WRITE) && !pte_write(entry)) {
> > > + page = NULL;
> > > + goto out;
> > > + }
> > > +
> > > + page += ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
> > > +
> > > /*
> > > * Note that page may be a sub-page, and with vmemmap
> > > * optimizations the page struct may be read only.
> > > @@ -6492,10 +6505,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
> > > * try_grab_page() should always be able to get the page here,
> > > * because we hold the ptl lock and have verified pte_present().
> > > */
> > > - if (try_grab_page(page, flags)) {
> > > - page = NULL;
> > > - goto out;
> > > - }
> > > + WARN_ON_ONCE(try_grab_page(page, flags));
> > > }
> > > out:
> > > spin_unlock(ptl);