Message-ID: <20231219075538.414708-8-peterx@redhat.com>
Date: Tue, 19 Dec 2023 15:55:32 +0800
From: peterx@...hat.com
To: linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Cc: Matthew Wilcox <willy@...radead.org>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Lorenzo Stoakes <lstoakes@...il.com>,
David Hildenbrand <david@...hat.com>,
Vlastimil Babka <vbabka@...e.cz>,
Mike Kravetz <mike.kravetz@...cle.com>,
Mike Rapoport <rppt@...nel.org>,
Christoph Hellwig <hch@...radead.org>,
John Hubbard <jhubbard@...dia.com>,
Andrew Jones <andrew.jones@...ux.dev>,
linux-arm-kernel@...ts.infradead.org,
Michael Ellerman <mpe@...erman.id.au>,
"Kirill A . Shutemov" <kirill@...temov.name>,
linuxppc-dev@...ts.ozlabs.org,
Rik van Riel <riel@...riel.com>,
linux-riscv@...ts.infradead.org,
Yang Shi <shy828301@...il.com>,
James Houghton <jthoughton@...gle.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Jason Gunthorpe <jgg@...dia.com>,
Andrea Arcangeli <aarcange@...hat.com>,
peterx@...hat.com,
Axel Rasmussen <axelrasmussen@...gle.com>
Subject: [PATCH 07/13] mm/gup: Refactor record_subpages() to find 1st small page
From: Peter Xu <peterx@...hat.com>
All the fast-gup functions take a tail page to operate on, and always need
to do page mask calculations before feeding that into record_subpages().
Merge that logic into record_subpages(), so that it does the nth_page()
calculation itself.
Signed-off-by: Peter Xu <peterx@...hat.com>
---
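Note for reviewers (an illustrative sketch only, mirroring the hunks below,
not extra code on top of the patch): record_subpages() now locates the first
small page itself from the huge page size, roughly as

    /* index of the small page covering 'addr' within a huge mapping of size 'sz' */
    start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);

so callers such as gup_hugepte() and gup_huge_pmd() can simply pass the head
page (pte_page(pte), pmd_page(orig), ...) together with the mapping size
(sz, PMD_SIZE, PUD_SIZE, PGDIR_SIZE) instead of open-coding the nth_page()
math at every call site.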
mm/gup.c | 25 ++++++++++++++-----------
1 file changed, 14 insertions(+), 11 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index bb5b7134f10b..82d28d517d0d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2767,13 +2767,16 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
}
#endif
-static int record_subpages(struct page *page, unsigned long addr,
- unsigned long end, struct page **pages)
+static int record_subpages(struct page *page, unsigned long sz,
+ unsigned long addr, unsigned long end,
+ struct page **pages)
{
+ struct page *start_page;
int nr;
+ start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
- pages[nr] = nth_page(page, nr);
+ pages[nr] = nth_page(start_page, nr);
return nr;
}
@@ -2808,8 +2811,8 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
/* hugepages are never "special" */
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
- page = nth_page(pte_page(pte), (addr & (sz - 1)) >> PAGE_SHIFT);
- refs = record_subpages(page, addr, end, pages + *nr);
+ page = pte_page(pte);
+ refs = record_subpages(page, sz, addr, end, pages + *nr);
folio = try_grab_folio(page, refs, flags);
if (!folio)
@@ -2882,8 +2885,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
pages, nr);
}
- page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
- refs = record_subpages(page, addr, end, pages + *nr);
+ page = pmd_page(orig);
+ refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);
folio = try_grab_folio(page, refs, flags);
if (!folio)
@@ -2926,8 +2929,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
pages, nr);
}
- page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT);
- refs = record_subpages(page, addr, end, pages + *nr);
+ page = pud_page(orig);
+ refs = record_subpages(page, PUD_SIZE, addr, end, pages + *nr);
folio = try_grab_folio(page, refs, flags);
if (!folio)
@@ -2966,8 +2969,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
BUILD_BUG_ON(pgd_devmap(orig));
- page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT);
- refs = record_subpages(page, addr, end, pages + *nr);
+ page = pgd_page(orig);
+ refs = record_subpages(page, PGDIR_SIZE, addr, end, pages + *nr);
folio = try_grab_folio(page, refs, flags);
if (!folio)
--
2.41.0