Message-ID: <20221110172246.32792d6a@canb.auug.org.au>
Date: Thu, 10 Nov 2022 17:22:46 +1100
From: Stephen Rothwell <sfr@...b.auug.org.au>
To: Andrew Morton <akpm@...ux-foundation.org>,
Jens Axboe <axboe@...nel.dk>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux Next Mailing List <linux-next@...r.kernel.org>,
Logan Gunthorpe <logang@...tatee.com>,
Mike Kravetz <mike.kravetz@...cle.com>
Subject: linux-next: manual merge of the mm-stable tree with the block tree

Hi all,

Today's linux-next merge of the mm-stable tree got a conflict in:

  mm/hugetlb.c

between commit:

  0f0892356fa1 ("mm: allow multiple error returns in try_grab_page()")

from the block tree and commit:

  57a196a58421 ("hugetlb: simplify hugetlb handling in follow_page_mask")

from the mm-stable tree.
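
For context, 0f0892356fa1 switches try_grab_page() from a bool success
return to an int error return (zero on success), while the call site added
by 57a196a58421 was written against the old bool convention, which is why
the check ends up inverted in the resolution below.  A minimal userspace
sketch of the two calling conventions (the stub functions and values are
illustrative placeholders, not the kernel implementations):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stubs only; not the kernel's try_grab_page(). */
static bool try_grab_page_bool(void *page, unsigned int flags)
{
	(void)page; (void)flags;
	return true;		/* old convention: true on success */
}

static int try_grab_page_int(void *page, unsigned int flags)
{
	(void)page; (void)flags;
	return 0;		/* new convention: 0 on success, -errno on failure */
}

int main(void)
{
	int dummy;
	void *page = &dummy;
	unsigned int flags = 0;

	/* mm-stable call site, written against the bool convention. */
	if (!try_grab_page_bool(page, flags))
		page = NULL;

	/* Resolved call site, written against the int convention:
	 * any non-zero return is an error. */
	if (try_grab_page_int(page, flags))
		page = NULL;

	printf("page is %s\n", page ? "grabbed" : "dropped");
	return 0;
}
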
I fixed it up (I think - see below) and can carry the fix as necessary.
This is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
--
Cheers,
Stephen Rothwell
diff --cc mm/hugetlb.c
index 3373d24e4a97,fdb36afea2b2..000000000000
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@@ -6222,6 -6199,62 +6212,62 @@@ static inline bool __follow_hugetlb_mus
  	return false;
  }
  
+ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
+ 				unsigned long address, unsigned int flags)
+ {
+ 	struct hstate *h = hstate_vma(vma);
+ 	struct mm_struct *mm = vma->vm_mm;
+ 	unsigned long haddr = address & huge_page_mask(h);
+ 	struct page *page = NULL;
+ 	spinlock_t *ptl;
+ 	pte_t *pte, entry;
+ 
+ 	/*
+ 	 * FOLL_PIN is not supported for follow_page(). Ordinary GUP goes via
+ 	 * follow_hugetlb_page().
+ 	 */
+ 	if (WARN_ON_ONCE(flags & FOLL_PIN))
+ 		return NULL;
+ 
+ retry:
+ 	pte = huge_pte_offset(mm, haddr, huge_page_size(h));
+ 	if (!pte)
+ 		return NULL;
+ 
+ 	ptl = huge_pte_lock(h, mm, pte);
+ 	entry = huge_ptep_get(pte);
+ 	if (pte_present(entry)) {
+ 		page = pte_page(entry) +
+ 			((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+ 		/*
+ 		 * Note that page may be a sub-page, and with vmemmap
+ 		 * optimizations the page struct may be read only.
+ 		 * try_grab_page() will increase the ref count on the
+ 		 * head page, so this will be OK.
+ 		 *
 -		 * try_grab_page() should always succeed here, because we hold
 -		 * the ptl lock and have verified pte_present().
++		 * try_grab_page() should always be able to get the page here,
++		 * because we hold the ptl lock and have verified pte_present().
+ 		 */
 -		if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
++		if (try_grab_page(page, flags)) {
+ 			page = NULL;
+ 			goto out;
+ 		}
+ 	} else {
+ 		if (is_hugetlb_entry_migration(entry)) {
+ 			spin_unlock(ptl);
+ 			__migration_entry_wait_huge(pte, ptl);
+ 			goto retry;
+ 		}
+ 		/*
+ 		 * hwpoisoned entry is treated as no_page_table in
+ 		 * follow_page_mask().
+ 		 */
+ 	}
+ out:
+ 	spin_unlock(ptl);
+ 	return page;
+ }
+ 
  long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
  			 struct page **pages, struct vm_area_struct **vmas,
  			 unsigned long *position, unsigned long *nr_pages,